Test Report: Docker_Linux_crio 21773

8990789ccd20605bfce25419a1a009c7a75246f6:2025-10-20:41995

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.24
35 TestAddons/parallel/Registry 14.1
36 TestAddons/parallel/RegistryCreds 0.47
37 TestAddons/parallel/Ingress 146.28
38 TestAddons/parallel/InspektorGadget 5.26
39 TestAddons/parallel/MetricsServer 5.32
41 TestAddons/parallel/CSI 39.11
42 TestAddons/parallel/Headlamp 2.5
43 TestAddons/parallel/CloudSpanner 5.24
44 TestAddons/parallel/LocalPath 7.28
45 TestAddons/parallel/NvidiaDevicePlugin 5.24
46 TestAddons/parallel/Yakd 5.26
47 TestAddons/parallel/AmdGpuDevicePlugin 5.24
98 TestFunctional/parallel/ServiceCmdConnect 602.97
123 TestFunctional/parallel/ServiceCmd/DeployApp 600.61
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.01
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.54
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.03
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
154 TestFunctional/parallel/ServiceCmd/Format 0.52
155 TestFunctional/parallel/ServiceCmd/URL 0.53
191 TestJSONOutput/pause/Command 1.98
197 TestJSONOutput/unpause/Command 1.45
292 TestPause/serial/Pause 5.69
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.1
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.06
310 TestStartStop/group/old-k8s-version/serial/Pause 6.09
316 TestStartStop/group/no-preload/serial/Pause 5.9
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.53
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.29
332 TestStartStop/group/newest-cni/serial/Pause 6.33
337 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.39
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.59
355 TestStartStop/group/embed-certs/serial/Pause 5.64
TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable volcano --alsologtostderr -v=1: exit status 11 (239.244902ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 11:58:42.948086   24262 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:58:42.948356   24262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:58:42.948365   24262 out.go:374] Setting ErrFile to fd 2...
	I1020 11:58:42.948369   24262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:58:42.948532   24262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:58:42.948822   24262 mustload.go:65] Loading cluster: addons-053741
	I1020 11:58:42.949133   24262 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:58:42.949150   24262 addons.go:606] checking whether the cluster is paused
	I1020 11:58:42.949222   24262 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:58:42.949233   24262 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:58:42.949564   24262 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:58:42.968185   24262 ssh_runner.go:195] Run: systemctl --version
	I1020 11:58:42.968227   24262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:58:42.985980   24262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:58:43.084625   24262 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:58:43.084704   24262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:58:43.114472   24262 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:58:43.114491   24262 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:58:43.114495   24262 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:58:43.114498   24262 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:58:43.114501   24262 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:58:43.114504   24262 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:58:43.114507   24262 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:58:43.114509   24262 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:58:43.114513   24262 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:58:43.114519   24262 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:58:43.114523   24262 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:58:43.114527   24262 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:58:43.114531   24262 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:58:43.114535   24262 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:58:43.114539   24262 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:58:43.114561   24262 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:58:43.114572   24262 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:58:43.114576   24262 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:58:43.114579   24262 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:58:43.114582   24262 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:58:43.114584   24262 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:58:43.114587   24262 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:58:43.114589   24262 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:58:43.114591   24262 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:58:43.114594   24262 cri.go:89] found id: ""
	I1020 11:58:43.114638   24262 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:58:43.129077   24262 out.go:203] 
	W1020 11:58:43.130377   24262 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:58:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:58:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:58:43.130400   24262 out.go:285] * 
	* 
	W1020 11:58:43.133384   24262 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:58:43.134726   24262 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
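
To reproduce the failing step outside the test suite, the same two node-side commands can be replayed by hand. The throwaway Go diagnostic below is not part of the test; it assumes the docker driver, so the node is a local container named addons-053741 (the profile name from this report).

package main

import (
	"fmt"
	"os/exec"
)

// Replays the two commands from the failed paused check. Expected result
// on this run: crictl prints container IDs, while runc fails with
// "open /run/runc: no such file or directory".
func main() {
	node := "addons-053741" // node container name from this report
	for _, args := range [][]string{
		{"sudo", "crictl", "ps", "-a", "--quiet", "--label", "io.kubernetes.pod.namespace=kube-system"},
		{"sudo", "runc", "list", "-f", "json"},
	} {
		out, err := exec.Command("docker",
			append([]string{"exec", node}, args...)...).CombinedOutput()
		fmt.Printf("$ %v\nerr: %v\n%s\n", args, err, out)
	}
}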

TestAddons/parallel/Registry (14.1s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.169705ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-gb2mv" [da56cdd5-4eb6-423d-a493-a81e51cc6362] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.001814944s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-wfdh9" [2f070aaa-93de-4ff2-ac06-e2ff2df91120] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00309738s
addons_test.go:392: (dbg) Run:  kubectl --context addons-053741 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-053741 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-053741 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.654570828s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable registry --alsologtostderr -v=1: exit status 11 (234.734302ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 11:59:06.855994   26691 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:59:06.856462   26691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:06.856478   26691 out.go:374] Setting ErrFile to fd 2...
	I1020 11:59:06.856486   26691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:06.856950   26691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:59:06.857740   26691 mustload.go:65] Loading cluster: addons-053741
	I1020 11:59:06.858547   26691 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:06.858576   26691 addons.go:606] checking whether the cluster is paused
	I1020 11:59:06.858708   26691 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:06.858723   26691 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:59:06.859138   26691 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:59:06.876988   26691 ssh_runner.go:195] Run: systemctl --version
	I1020 11:59:06.877038   26691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:59:06.894270   26691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:59:06.992267   26691 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:59:06.992355   26691 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:59:07.021763   26691 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:59:07.021797   26691 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:59:07.021803   26691 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:59:07.021808   26691 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:59:07.021812   26691 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:59:07.021817   26691 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:59:07.021821   26691 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:59:07.021825   26691 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:59:07.021829   26691 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:59:07.021842   26691 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:59:07.021847   26691 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:59:07.021849   26691 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:59:07.021852   26691 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:59:07.021854   26691 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:59:07.021857   26691 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:59:07.021862   26691 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:59:07.021866   26691 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:59:07.021870   26691 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:59:07.021873   26691 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:59:07.021875   26691 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:59:07.021877   26691 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:59:07.021879   26691 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:59:07.021882   26691 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:59:07.021884   26691 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:59:07.021887   26691 cri.go:89] found id: ""
	I1020 11:59:07.021921   26691 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:59:07.035574   26691 out.go:203] 
	W1020 11:59:07.036856   26691 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:59:07.036872   26691 out.go:285] * 
	* 
	W1020 11:59:07.039826   26691 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:59:07.041023   26691 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.10s)

TestAddons/parallel/RegistryCreds (0.47s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.342002ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-053741
addons_test.go:332: (dbg) Run:  kubectl --context addons-053741 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (280.600037ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 11:59:10.970827   27439 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:59:10.971172   27439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:10.971187   27439 out.go:374] Setting ErrFile to fd 2...
	I1020 11:59:10.971193   27439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:10.971544   27439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:59:10.971903   27439 mustload.go:65] Loading cluster: addons-053741
	I1020 11:59:10.972536   27439 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:10.972559   27439 addons.go:606] checking whether the cluster is paused
	I1020 11:59:10.972699   27439 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:10.972716   27439 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:59:10.973458   27439 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:59:10.995339   27439 ssh_runner.go:195] Run: systemctl --version
	I1020 11:59:10.995405   27439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:59:11.017510   27439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:59:11.127193   27439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:59:11.127290   27439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:59:11.165608   27439 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:59:11.165650   27439 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:59:11.165656   27439 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:59:11.165661   27439 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:59:11.165666   27439 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:59:11.165671   27439 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:59:11.165675   27439 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:59:11.165678   27439 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:59:11.165682   27439 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:59:11.165705   27439 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:59:11.165715   27439 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:59:11.165719   27439 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:59:11.165724   27439 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:59:11.165728   27439 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:59:11.165732   27439 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:59:11.165746   27439 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:59:11.165754   27439 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:59:11.165760   27439 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:59:11.165763   27439 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:59:11.165767   27439 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:59:11.165784   27439 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:59:11.165788   27439 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:59:11.165792   27439 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:59:11.165796   27439 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:59:11.165800   27439 cri.go:89] found id: ""
	I1020 11:59:11.165854   27439 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:59:11.184304   27439 out.go:203] 
	W1020 11:59:11.185949   27439 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:59:11.185968   27439 out.go:285] * 
	* 
	W1020 11:59:11.190407   27439 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:59:11.192339   27439 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.47s)

TestAddons/parallel/Ingress (146.28s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-053741 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-053741 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-053741 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2f7daa89-7fef-4c30-9fe4-02fe5d82e022] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2f7daa89-7fef-4c30-9fe4-02fe5d82e022] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002781621s
I1020 11:59:12.956309   14592 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.583589263s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-053741 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-053741
helpers_test.go:243: (dbg) docker inspect addons-053741:

-- stdout --
	[
	    {
	        "Id": "e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc",
	        "Created": "2025-10-20T11:56:32.897693096Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16557,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T11:56:32.932133936Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc/hosts",
	        "LogPath": "/var/lib/docker/containers/e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc/e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc-json.log",
	        "Name": "/addons-053741",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-053741:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-053741",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc",
	                "LowerDir": "/var/lib/docker/overlay2/10cb3573eba5f3a4e587290f1b4c97305b0d9a2613f14accbd00160ea845cbf4-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10cb3573eba5f3a4e587290f1b4c97305b0d9a2613f14accbd00160ea845cbf4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10cb3573eba5f3a4e587290f1b4c97305b0d9a2613f14accbd00160ea845cbf4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10cb3573eba5f3a4e587290f1b4c97305b0d9a2613f14accbd00160ea845cbf4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-053741",
	                "Source": "/var/lib/docker/volumes/addons-053741/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-053741",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-053741",
	                "name.minikube.sigs.k8s.io": "addons-053741",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3746cc6c75059c5c031ae9b0f2b8b0f935f28fde031ed5d409712924ccadc61e",
	            "SandboxKey": "/var/run/docker/netns/3746cc6c7505",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-053741": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:b1:d2:4c:50:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "af24c59a8b3649aed66b6500324487830ea6dc59f069d7c296b0e8ad05150727",
	                    "EndpointID": "4cb647936ac949ee914ddf3904d1f79047f4c63c913e9a5ed7835c6544c9681d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-053741",
	                        "e4704220315b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-053741 -n addons-053741
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-053741 logs -n 25: (1.181268655s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-253279 --alsologtostderr --binary-mirror http://127.0.0.1:42287 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-253279 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │                     │
	│ delete  │ -p binary-mirror-253279                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-253279 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:56 UTC │
	│ addons  │ disable dashboard -p addons-053741                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │                     │
	│ addons  │ enable dashboard -p addons-053741                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │                     │
	│ start   │ -p addons-053741 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:58 UTC │
	│ addons  │ addons-053741 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:58 UTC │                     │
	│ addons  │ addons-053741 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:58 UTC │                     │
	│ addons  │ enable headlamp -p addons-053741 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:58 UTC │                     │
	│ addons  │ addons-053741 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:58 UTC │                     │
	│ addons  │ addons-053741 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:58 UTC │                     │
	│ addons  │ addons-053741 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:58 UTC │                     │
	│ addons  │ addons-053741 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │                     │
	│ addons  │ addons-053741 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │                     │
	│ ip      │ addons-053741 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │ 20 Oct 25 11:59 UTC │
	│ addons  │ addons-053741 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │                     │
	│ ssh     │ addons-053741 ssh cat /opt/local-path-provisioner/pvc-cd57e4c8-185c-4547-ba57-5ad9deb884da_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │ 20 Oct 25 11:59 UTC │
	│ addons  │ addons-053741 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-053741                                                                                                                                                                                                                                                                                                                                                                                           │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │ 20 Oct 25 11:59 UTC │
	│ addons  │ addons-053741 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │                     │
	│ addons  │ addons-053741 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │                     │
	│ ssh     │ addons-053741 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │                     │
	│ addons  │ addons-053741 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │                     │
	│ addons  │ addons-053741 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │                     │
	│ addons  │ addons-053741 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 11:59 UTC │                     │
	│ ip      │ addons-053741 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-053741        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 11:56:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
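
Each entry below therefore begins with a severity letter (I/W/E/F), the date as mmdd, a microsecond timestamp, the thread id, and the emitting file:line. A minimal sketch for slicing output in this format, assuming it has been saved to a hypothetical file minikube-start.log and GNU grep/awk are available:

	# keep only warnings, errors, and fatals
	grep -E '^[[:space:]]*[WEF][0-9]{4} ' minikube-start.log
	# keep entries at or after a given time of day (field 2 is hh:mm:ss.uuuuuu)
	awk '$2 >= "11:56:30"' minikube-start.log
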
	I1020 11:56:08.612168   15900 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:56:08.612405   15900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:56:08.612413   15900 out.go:374] Setting ErrFile to fd 2...
	I1020 11:56:08.612417   15900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:56:08.612604   15900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:56:08.613147   15900 out.go:368] Setting JSON to false
	I1020 11:56:08.613922   15900 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2318,"bootTime":1760959051,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 11:56:08.614006   15900 start.go:141] virtualization: kvm guest
	I1020 11:56:08.616230   15900 out.go:179] * [addons-053741] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 11:56:08.617876   15900 notify.go:220] Checking for updates...
	I1020 11:56:08.617903   15900 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 11:56:08.619578   15900 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 11:56:08.621112   15900 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 11:56:08.622473   15900 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 11:56:08.623967   15900 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 11:56:08.625562   15900 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 11:56:08.627114   15900 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 11:56:08.650451   15900 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 11:56:08.650537   15900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 11:56:08.707089   15900 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-20 11:56:08.697856355 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 11:56:08.707190   15900 docker.go:318] overlay module found
	I1020 11:56:08.709194   15900 out.go:179] * Using the docker driver based on user configuration
	I1020 11:56:08.710436   15900 start.go:305] selected driver: docker
	I1020 11:56:08.710450   15900 start.go:925] validating driver "docker" against <nil>
	I1020 11:56:08.710460   15900 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 11:56:08.711011   15900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 11:56:08.772032   15900 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-20 11:56:08.762946483 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 11:56:08.772250   15900 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 11:56:08.772446   15900 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 11:56:08.774323   15900 out.go:179] * Using Docker driver with root privileges
	I1020 11:56:08.775754   15900 cni.go:84] Creating CNI manager for ""
	I1020 11:56:08.775840   15900 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 11:56:08.775855   15900 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 11:56:08.775917   15900 start.go:349] cluster config:
	{Name:addons-053741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 11:56:08.777626   15900 out.go:179] * Starting "addons-053741" primary control-plane node in "addons-053741" cluster
	I1020 11:56:08.779161   15900 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 11:56:08.780552   15900 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 11:56:08.782012   15900 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 11:56:08.782057   15900 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 11:56:08.782068   15900 cache.go:58] Caching tarball of preloaded images
	I1020 11:56:08.782089   15900 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 11:56:08.782172   15900 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 11:56:08.782187   15900 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 11:56:08.782544   15900 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/config.json ...
	I1020 11:56:08.782573   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/config.json: {Name:mka0af212ef52bccd2f81f1166643cbe60e0e889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:08.799003   15900 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1020 11:56:08.799153   15900 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1020 11:56:08.799172   15900 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1020 11:56:08.799176   15900 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1020 11:56:08.799183   15900 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1020 11:56:08.799191   15900 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1020 11:56:21.400635   15900 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1020 11:56:21.400671   15900 cache.go:232] Successfully downloaded all kic artifacts
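
Since the kic base image is pinned by sha256 digest, its presence can be double-checked with the stock docker CLI once loading completes (repository name taken from the log above):

	docker images --digests gcr.io/k8s-minikube/kicbase-builds
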
	I1020 11:56:21.400711   15900 start.go:360] acquireMachinesLock for addons-053741: {Name:mkcdccf6181f0e4e87f181300157c2558692b419 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 11:56:21.400829   15900 start.go:364] duration metric: took 99.997µs to acquireMachinesLock for "addons-053741"
	I1020 11:56:21.400855   15900 start.go:93] Provisioning new machine with config: &{Name:addons-053741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 11:56:21.400920   15900 start.go:125] createHost starting for "" (driver="docker")
	I1020 11:56:21.403625   15900 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1020 11:56:21.403881   15900 start.go:159] libmachine.API.Create for "addons-053741" (driver="docker")
	I1020 11:56:21.403914   15900 client.go:168] LocalClient.Create starting
	I1020 11:56:21.404040   15900 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 11:56:21.569633   15900 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 11:56:21.629718   15900 cli_runner.go:164] Run: docker network inspect addons-053741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 11:56:21.647044   15900 cli_runner.go:211] docker network inspect addons-053741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 11:56:21.647138   15900 network_create.go:284] running [docker network inspect addons-053741] to gather additional debugging logs...
	I1020 11:56:21.647163   15900 cli_runner.go:164] Run: docker network inspect addons-053741
	W1020 11:56:21.663226   15900 cli_runner.go:211] docker network inspect addons-053741 returned with exit code 1
	I1020 11:56:21.663255   15900 network_create.go:287] error running [docker network inspect addons-053741]: docker network inspect addons-053741: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-053741 not found
	I1020 11:56:21.663287   15900 network_create.go:289] output of [docker network inspect addons-053741]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-053741 not found
	
	** /stderr **
	I1020 11:56:21.663432   15900 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 11:56:21.680812   15900 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc2990}
	I1020 11:56:21.680846   15900 network_create.go:124] attempt to create docker network addons-053741 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1020 11:56:21.680893   15900 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-053741 addons-053741
	I1020 11:56:21.738321   15900 network_create.go:108] docker network addons-053741 192.168.49.0/24 created
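
The subnet and gateway picked above can be read back from the created network; a quick check with the standard docker CLI:

	docker network inspect addons-053741 \
	  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	# expected per the log: 192.168.49.0/24 via 192.168.49.1
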
	I1020 11:56:21.738351   15900 kic.go:121] calculated static IP "192.168.49.2" for the "addons-053741" container
	I1020 11:56:21.738416   15900 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 11:56:21.755352   15900 cli_runner.go:164] Run: docker volume create addons-053741 --label name.minikube.sigs.k8s.io=addons-053741 --label created_by.minikube.sigs.k8s.io=true
	I1020 11:56:21.773712   15900 oci.go:103] Successfully created a docker volume addons-053741
	I1020 11:56:21.773815   15900 cli_runner.go:164] Run: docker run --rm --name addons-053741-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-053741 --entrypoint /usr/bin/test -v addons-053741:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 11:56:28.454707   15900 cli_runner.go:217] Completed: docker run --rm --name addons-053741-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-053741 --entrypoint /usr/bin/test -v addons-053741:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.68084714s)
	I1020 11:56:28.454737   15900 oci.go:107] Successfully prepared a docker volume addons-053741
	I1020 11:56:28.454758   15900 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 11:56:28.454793   15900 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 11:56:28.454852   15900 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-053741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 11:56:32.822532   15900 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-053741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.367634417s)
	I1020 11:56:32.822565   15900 kic.go:203] duration metric: took 4.367769587s to extract preloaded images to volume ...
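
The two docker run invocations above follow a general pattern: seed a named volume by overriding the image entrypoint with tar, so the node container later finds the images already unpacked under /var. A generic sketch of the same idea (volume, tarball, and image names are placeholders, not values from this run):

	docker volume create demo-vol
	docker run --rm \
	  -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
	  -v demo-vol:/extractDir \
	  --entrypoint /usr/bin/tar \
	  some/base-image -I lz4 -xf /preloaded.tar -C /extractDir
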
	W1020 11:56:32.822646   15900 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 11:56:32.822674   15900 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 11:56:32.822704   15900 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 11:56:32.880379   15900 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-053741 --name addons-053741 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-053741 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-053741 --network addons-053741 --ip 192.168.49.2 --volume addons-053741:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 11:56:33.177594   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Running}}
	I1020 11:56:33.198330   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:33.216361   15900 cli_runner.go:164] Run: docker exec addons-053741 stat /var/lib/dpkg/alternatives/iptables
	I1020 11:56:33.269151   15900 oci.go:144] the created container "addons-053741" has a running status.
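
Because the container publishes 22/tcp (and the other --publish ports above) to ephemeral host ports, the SSH endpoint resolved in the lines that follow can also be looked up directly:

	docker port addons-053741 22/tcp   # e.g. 127.0.0.1:32768, matching the log
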
	I1020 11:56:33.269191   15900 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa...
	I1020 11:56:33.364299   15900 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 11:56:33.390827   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:33.410237   15900 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 11:56:33.410262   15900 kic_runner.go:114] Args: [docker exec --privileged addons-053741 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 11:56:33.467110   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:33.485438   15900 machine.go:93] provisionDockerMachine start ...
	I1020 11:56:33.485546   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:33.510914   15900 main.go:141] libmachine: Using SSH client type: native
	I1020 11:56:33.511236   15900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1020 11:56:33.511261   15900 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 11:56:33.511996   15900 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60374->127.0.0.1:32768: read: connection reset by peer
	I1020 11:56:36.653764   15900 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-053741
	
	I1020 11:56:36.653805   15900 ubuntu.go:182] provisioning hostname "addons-053741"
	I1020 11:56:36.653877   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:36.672258   15900 main.go:141] libmachine: Using SSH client type: native
	I1020 11:56:36.672467   15900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1020 11:56:36.672479   15900 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-053741 && echo "addons-053741" | sudo tee /etc/hostname
	I1020 11:56:36.820309   15900 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-053741
	
	I1020 11:56:36.820382   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:36.838814   15900 main.go:141] libmachine: Using SSH client type: native
	I1020 11:56:36.839024   15900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1020 11:56:36.839063   15900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-053741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-053741/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-053741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 11:56:36.979642   15900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 11:56:36.979674   15900 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 11:56:36.979741   15900 ubuntu.go:190] setting up certificates
	I1020 11:56:36.979757   15900 provision.go:84] configureAuth start
	I1020 11:56:36.979858   15900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-053741
	I1020 11:56:36.997801   15900 provision.go:143] copyHostCerts
	I1020 11:56:36.997873   15900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 11:56:36.998026   15900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 11:56:36.998295   15900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 11:56:36.998453   15900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.addons-053741 san=[127.0.0.1 192.168.49.2 addons-053741 localhost minikube]
	I1020 11:56:37.209917   15900 provision.go:177] copyRemoteCerts
	I1020 11:56:37.209974   15900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 11:56:37.210008   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:37.227650   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:37.327458   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 11:56:37.347387   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 11:56:37.365572   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1020 11:56:37.382496   15900 provision.go:87] duration metric: took 402.72045ms to configureAuth
	I1020 11:56:37.382522   15900 ubuntu.go:206] setting minikube options for container-runtime
	I1020 11:56:37.382711   15900 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:56:37.382946   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:37.400314   15900 main.go:141] libmachine: Using SSH client type: native
	I1020 11:56:37.400533   15900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1020 11:56:37.400550   15900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 11:56:37.644520   15900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 11:56:37.644543   15900 machine.go:96] duration metric: took 4.159082513s to provisionDockerMachine
	I1020 11:56:37.644552   15900 client.go:171] duration metric: took 16.240629628s to LocalClient.Create
	I1020 11:56:37.644569   15900 start.go:167] duration metric: took 16.240689069s to libmachine.API.Create "addons-053741"
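
The CRIO_MINIKUBE_OPTIONS drop-in written over SSH during provisioning can be read back afterwards; one way, using minikube's own ssh helper with the profile name from this run:

	minikube -p addons-053741 ssh -- cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
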
	I1020 11:56:37.644576   15900 start.go:293] postStartSetup for "addons-053741" (driver="docker")
	I1020 11:56:37.644588   15900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 11:56:37.644659   15900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 11:56:37.644711   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:37.662145   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:37.762963   15900 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 11:56:37.766567   15900 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 11:56:37.766600   15900 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 11:56:37.766611   15900 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 11:56:37.766666   15900 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 11:56:37.766692   15900 start.go:296] duration metric: took 122.111181ms for postStartSetup
	I1020 11:56:37.766987   15900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-053741
	I1020 11:56:37.784302   15900 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/config.json ...
	I1020 11:56:37.784566   15900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 11:56:37.784604   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:37.802863   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:37.899938   15900 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 11:56:37.904427   15900 start.go:128] duration metric: took 16.50349315s to createHost
	I1020 11:56:37.904454   15900 start.go:83] releasing machines lock for "addons-053741", held for 16.503610565s
	I1020 11:56:37.904522   15900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-053741
	I1020 11:56:37.923025   15900 ssh_runner.go:195] Run: cat /version.json
	I1020 11:56:37.923086   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:37.923097   15900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 11:56:37.923153   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:37.940921   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:37.941851   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:38.035941   15900 ssh_runner.go:195] Run: systemctl --version
	I1020 11:56:38.092493   15900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 11:56:38.126876   15900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 11:56:38.131693   15900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 11:56:38.131763   15900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 11:56:38.157060   15900 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1020 11:56:38.157080   15900 start.go:495] detecting cgroup driver to use...
	I1020 11:56:38.157126   15900 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 11:56:38.157169   15900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 11:56:38.173293   15900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 11:56:38.185953   15900 docker.go:218] disabling cri-docker service (if available) ...
	I1020 11:56:38.186005   15900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 11:56:38.201875   15900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 11:56:38.220964   15900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 11:56:38.302230   15900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 11:56:38.387724   15900 docker.go:234] disabling docker service ...
	I1020 11:56:38.387805   15900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 11:56:38.405793   15900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 11:56:38.418484   15900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 11:56:38.499921   15900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 11:56:38.579090   15900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 11:56:38.591470   15900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 11:56:38.605434   15900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 11:56:38.605499   15900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.616167   15900 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 11:56:38.616237   15900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.625346   15900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.634049   15900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.642541   15900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 11:56:38.650537   15900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.659161   15900 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.672395   15900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
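
Taken together, the sed edits above pin the pause image, switch CRI-O to the systemd cgroup manager, and open unprivileged low ports. A sketch for confirming the net effect inside the node (expected values are approximate, inferred from the commands above):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
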
	I1020 11:56:38.681063   15900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 11:56:38.688335   15900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1020 11:56:38.688395   15900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1020 11:56:38.700582   15900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
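
Both settings matter for pod networking: kube-proxy and the bridge CNI need bridged traffic to traverse iptables, and IP forwarding must stay enabled. A quick verification inside the node:

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
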
	I1020 11:56:38.708605   15900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 11:56:38.787302   15900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 11:56:38.890184   15900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 11:56:38.890254   15900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 11:56:38.894134   15900 start.go:563] Will wait 60s for crictl version
	I1020 11:56:38.894182   15900 ssh_runner.go:195] Run: which crictl
	I1020 11:56:38.897679   15900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 11:56:38.920641   15900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 11:56:38.920744   15900 ssh_runner.go:195] Run: crio --version
	I1020 11:56:38.946597   15900 ssh_runner.go:195] Run: crio --version
	I1020 11:56:38.974916   15900 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 11:56:38.976447   15900 cli_runner.go:164] Run: docker network inspect addons-053741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 11:56:38.993556   15900 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1020 11:56:38.997500   15900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 11:56:39.007633   15900 kubeadm.go:883] updating cluster {Name:addons-053741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 11:56:39.007742   15900 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 11:56:39.007803   15900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 11:56:39.038259   15900 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 11:56:39.038278   15900 crio.go:433] Images already preloaded, skipping extraction
	I1020 11:56:39.038326   15900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 11:56:39.063265   15900 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 11:56:39.063292   15900 cache_images.go:85] Images are preloaded, skipping loading
	I1020 11:56:39.063299   15900 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1020 11:56:39.063389   15900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-053741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-053741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 11:56:39.063470   15900 ssh_runner.go:195] Run: crio config
	I1020 11:56:39.108082   15900 cni.go:84] Creating CNI manager for ""
	I1020 11:56:39.108106   15900 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 11:56:39.108131   15900 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 11:56:39.108153   15900 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-053741 NodeName:addons-053741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 11:56:39.108271   15900 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-053741"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
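
Recent kubeadm releases (v1.26 and later) can lint a generated config like the one above before init runs; a sketch, using the path the file is copied to a few lines below:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
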
	
	I1020 11:56:39.108329   15900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 11:56:39.116479   15900 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 11:56:39.116540   15900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 11:56:39.123925   15900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1020 11:56:39.135888   15900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 11:56:39.151125   15900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1020 11:56:39.163397   15900 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1020 11:56:39.167046   15900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 11:56:39.177028   15900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 11:56:39.252344   15900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 11:56:39.278283   15900 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741 for IP: 192.168.49.2
	I1020 11:56:39.278305   15900 certs.go:195] generating shared ca certs ...
	I1020 11:56:39.278328   15900 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:39.278440   15900 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 11:56:39.633836   15900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt ...
	I1020 11:56:39.633867   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt: {Name:mkd4283c49b35ab0b046ccb70ad96bfdc7ba8c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:39.634042   15900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key ...
	I1020 11:56:39.634058   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key: {Name:mk854c3edcef668e8b0061c2f1cf9591ba30304d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:39.634132   15900 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 11:56:39.896741   15900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt ...
	I1020 11:56:39.896780   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt: {Name:mkb7f4b59907f6c15f36fa85b6156fd4fe57bd77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:39.896944   15900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key ...
	I1020 11:56:39.896955   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key: {Name:mk76fe7029c9c20baac31bcfd9c786c4cca764ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:39.897031   15900 certs.go:257] generating profile certs ...
	I1020 11:56:39.897093   15900 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.key
	I1020 11:56:39.897107   15900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt with IP's: []
	I1020 11:56:40.165271   15900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt ...
	I1020 11:56:40.165301   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: {Name:mka976861148c42dfdc0036143c0f4cd4cb6de63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:40.165467   15900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.key ...
	I1020 11:56:40.165478   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.key: {Name:mk900e14c5d3af0870911210416b9178e8d9a8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:40.165551   15900 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.key.c0aa3d13
	I1020 11:56:40.165570   15900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.crt.c0aa3d13 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1020 11:56:40.409912   15900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.crt.c0aa3d13 ...
	I1020 11:56:40.409942   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.crt.c0aa3d13: {Name:mk3449df9f6180b42abef687c645d7f336841e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:40.410106   15900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.key.c0aa3d13 ...
	I1020 11:56:40.410119   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.key.c0aa3d13: {Name:mk39b06ded4a76287ad1d835919fb11a8bb60bc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:40.411166   15900 certs.go:382] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.crt.c0aa3d13 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.crt
	I1020 11:56:40.411276   15900 certs.go:386] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.key.c0aa3d13 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.key
	I1020 11:56:40.411331   15900 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.key
	I1020 11:56:40.411350   15900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.crt with IP's: []
	I1020 11:56:41.106717   15900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.crt ...
	I1020 11:56:41.106745   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.crt: {Name:mkd51af5fe344c3ecc6fa772d38f7b9edd844154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:41.106915   15900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.key ...
	I1020 11:56:41.106927   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.key: {Name:mkf4e7d0b0d92d3a97ffca1208025fcf09fe71cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:41.107108   15900 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 11:56:41.107148   15900 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 11:56:41.107171   15900 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 11:56:41.107200   15900 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 11:56:41.107762   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 11:56:41.125227   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 11:56:41.142141   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 11:56:41.159089   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 11:56:41.175606   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1020 11:56:41.192389   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 11:56:41.209029   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 11:56:41.225730   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 11:56:41.242056   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 11:56:41.260749   15900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 11:56:41.272580   15900 ssh_runner.go:195] Run: openssl version
	I1020 11:56:41.278356   15900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 11:56:41.288871   15900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 11:56:41.292455   15900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 11:56:41.292504   15900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 11:56:41.326132   15900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
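For context: `openssl x509 -hash` prints the subject-name hash of the certificate (b5213941 for this CA), and OpenSSL-based clients look up trusted CAs in /etc/ssl/certs through symlinks named <hash>.0, which is exactly what the ln -fs above creates. A sketch of the same two steps (requires root; illustrative only, and it links straight to the pem rather than through /etc/ssl/certs/minikubeCA.pem as the log does):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // ignore error: the link may not exist yet
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link)
	}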
	I1020 11:56:41.334786   15900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 11:56:41.338479   15900 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 11:56:41.338535   15900 kubeadm.go:400] StartCluster: {Name:addons-053741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 11:56:41.338628   15900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:56:41.338699   15900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:56:41.363960   15900 cri.go:89] found id: ""
	I1020 11:56:41.364025   15900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 11:56:41.372275   15900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 11:56:41.380307   15900 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 11:56:41.380363   15900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 11:56:41.388387   15900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 11:56:41.388402   15900 kubeadm.go:157] found existing configuration files:
	
	I1020 11:56:41.388441   15900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 11:56:41.396109   15900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 11:56:41.396167   15900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 11:56:41.403569   15900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 11:56:41.411757   15900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 11:56:41.411847   15900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 11:56:41.419571   15900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 11:56:41.427339   15900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 11:56:41.427399   15900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 11:56:41.434628   15900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 11:56:41.442111   15900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 11:56:41.442158   15900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 11:56:41.449535   15900 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
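The long --ignore-preflight-errors list exists because kubeadm here runs inside the kicbase container, where checks such as Swap, Mem, NumCPU and SystemVerification inspect host resources the container cannot see or change. A toy sketch of assembling such an invocation (abbreviated list; not minikube's code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		ignored := []string{ // abbreviated from the log above
			"DirAvailable--etc-kubernetes-manifests",
			"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
		}
		cmd := fmt.Sprintf(
			"kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s",
			strings.Join(ignored, ","))
		fmt.Println(cmd)
	}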
	I1020 11:56:41.485364   15900 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 11:56:41.485431   15900 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 11:56:41.518569   15900 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 11:56:41.518664   15900 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1020 11:56:41.518741   15900 kubeadm.go:318] OS: Linux
	I1020 11:56:41.518847   15900 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 11:56:41.518945   15900 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 11:56:41.519023   15900 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 11:56:41.519116   15900 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 11:56:41.519186   15900 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 11:56:41.519275   15900 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 11:56:41.519366   15900 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 11:56:41.519449   15900 kubeadm.go:318] CGROUPS_IO: enabled
	I1020 11:56:41.576131   15900 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 11:56:41.576300   15900 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 11:56:41.576455   15900 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 11:56:41.582894   15900 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 11:56:41.585213   15900 out.go:252]   - Generating certificates and keys ...
	I1020 11:56:41.585329   15900 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 11:56:41.585445   15900 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 11:56:41.755942   15900 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 11:56:41.890346   15900 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 11:56:42.138384   15900 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 11:56:42.584259   15900 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 11:56:42.787960   15900 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 11:56:42.788074   15900 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-053741 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1020 11:56:42.854691   15900 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 11:56:42.854834   15900 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-053741 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1020 11:56:43.062204   15900 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 11:56:43.203840   15900 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 11:56:43.800169   15900 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 11:56:43.800261   15900 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 11:56:43.886198   15900 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 11:56:44.496897   15900 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 11:56:44.744014   15900 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 11:56:45.092185   15900 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 11:56:45.287755   15900 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 11:56:45.288257   15900 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 11:56:45.291983   15900 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1020 11:56:45.293807   15900 out.go:252]   - Booting up control plane ...
	I1020 11:56:45.293913   15900 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 11:56:45.294013   15900 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 11:56:45.294567   15900 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 11:56:45.322076   15900 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 11:56:45.322226   15900 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 11:56:45.328966   15900 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 11:56:45.329127   15900 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 11:56:45.329217   15900 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 11:56:45.426306   15900 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 11:56:45.426451   15900 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 11:56:45.928113   15900 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.845905ms
	I1020 11:56:45.931944   15900 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 11:56:45.932080   15900 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1020 11:56:45.932199   15900 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 11:56:45.932328   15900 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 11:56:47.702110   15900 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.769332644s
	I1020 11:56:47.972060   15900 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.040133633s
	I1020 11:56:49.433965   15900 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501999515s
	I1020 11:56:49.444945   15900 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 11:56:49.455143   15900 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 11:56:49.463026   15900 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 11:56:49.463303   15900 kubeadm.go:318] [mark-control-plane] Marking the node addons-053741 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 11:56:49.471015   15900 kubeadm.go:318] [bootstrap-token] Using token: z27odz.nb33zoome7hq0gb4
	I1020 11:56:49.472384   15900 out.go:252]   - Configuring RBAC rules ...
	I1020 11:56:49.472533   15900 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 11:56:49.474999   15900 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 11:56:49.479729   15900 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 11:56:49.482295   15900 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 11:56:49.484625   15900 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 11:56:49.488132   15900 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 11:56:49.840233   15900 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 11:56:50.268040   15900 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 11:56:50.840597   15900 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 11:56:50.841641   15900 kubeadm.go:318] 
	I1020 11:56:50.841719   15900 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 11:56:50.841727   15900 kubeadm.go:318] 
	I1020 11:56:50.841826   15900 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 11:56:50.841836   15900 kubeadm.go:318] 
	I1020 11:56:50.841857   15900 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 11:56:50.841910   15900 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 11:56:50.841952   15900 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 11:56:50.841958   15900 kubeadm.go:318] 
	I1020 11:56:50.842000   15900 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 11:56:50.842006   15900 kubeadm.go:318] 
	I1020 11:56:50.842043   15900 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 11:56:50.842049   15900 kubeadm.go:318] 
	I1020 11:56:50.842095   15900 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 11:56:50.842162   15900 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 11:56:50.842218   15900 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 11:56:50.842224   15900 kubeadm.go:318] 
	I1020 11:56:50.842294   15900 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 11:56:50.842388   15900 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 11:56:50.842408   15900 kubeadm.go:318] 
	I1020 11:56:50.842534   15900 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token z27odz.nb33zoome7hq0gb4 \
	I1020 11:56:50.842639   15900 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e \
	I1020 11:56:50.842659   15900 kubeadm.go:318] 	--control-plane 
	I1020 11:56:50.842663   15900 kubeadm.go:318] 
	I1020 11:56:50.842791   15900 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 11:56:50.842800   15900 kubeadm.go:318] 
	I1020 11:56:50.842867   15900 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token z27odz.nb33zoome7hq0gb4 \
	I1020 11:56:50.842959   15900 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e 
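The sha256:… value in the join command is not a hash of the certificate file; kubeadm hashes the CA certificate's SubjectPublicKeyInfo so that joining nodes can pin the cluster CA during token-based discovery. It can be recomputed from ca.crt with the standard library:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Recompute --discovery-token-ca-cert-hash: sha256 over the CA
		// certificate's SubjectPublicKeyInfo (path taken from the log).
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}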
	I1020 11:56:50.845031   15900 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1020 11:56:50.845135   15900 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1020 11:56:50.845159   15900 cni.go:84] Creating CNI manager for ""
	I1020 11:56:50.845180   15900 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 11:56:50.847478   15900 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 11:56:50.848702   15900 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 11:56:50.852848   15900 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 11:56:50.852865   15900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 11:56:50.865320   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 11:56:51.068632   15900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 11:56:51.068736   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:51.068816   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-053741 minikube.k8s.io/updated_at=2025_10_20T11_56_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=addons-053741 minikube.k8s.io/primary=true
	I1020 11:56:51.144747   15900 ops.go:34] apiserver oom_adj: -16
	I1020 11:56:51.144783   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:51.645544   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:52.145575   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:52.645093   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:53.145034   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:53.645438   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:54.145021   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:54.645657   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:55.145874   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:55.645922   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:56.144876   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:56.206916   15900 kubeadm.go:1113] duration metric: took 5.138235066s to wait for elevateKubeSystemPrivileges
	I1020 11:56:56.206956   15900 kubeadm.go:402] duration metric: took 14.86842521s to StartCluster
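The burst of `kubectl get sa default` calls above is a poll loop: minikube waits for the token controller to create the default ServiceAccount before binding cluster-admin to it, and the 5.1s "elevateKubeSystemPrivileges" metric is this wait. A stripped-down version of such a loop (hypothetical helper, 500ms cadence as in the log):

	package main

	import (
		"errors"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if cmd.Run() == nil {
				return nil // SA exists; safe to bind RBAC to it
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("default ServiceAccount never appeared")
	}

	func main() { _ = waitForDefaultSA(time.Minute) }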
	I1020 11:56:56.206977   15900 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:56.207133   15900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 11:56:56.207624   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:56.207816   15900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 11:56:56.207871   15900 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 11:56:56.207915   15900 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
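The interleaved `Setting addon …` / `Checking if "addons-053741" exists` lines that follow come from enabling each entry in the toEnable map concurrently. A minimal sketch of that fan-out pattern (abbreviated map, stand-in work; not minikube's actual code):

	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		toEnable := map[string]bool{
			"registry": true, "ingress": true, "volcano": true, // abbreviated
		}
		var wg sync.WaitGroup
		for name, enabled := range toEnable {
			if !enabled {
				continue
			}
			wg.Add(1)
			go func(n string) {
				defer wg.Done()
				fmt.Println("Setting addon", n, "=true") // stand-in for real work
			}(name)
		}
		wg.Wait()
	}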
	I1020 11:56:56.208033   15900 addons.go:69] Setting yakd=true in profile "addons-053741"
	I1020 11:56:56.208040   15900 addons.go:69] Setting gcp-auth=true in profile "addons-053741"
	I1020 11:56:56.208076   15900 mustload.go:65] Loading cluster: addons-053741
	I1020 11:56:56.208078   15900 addons.go:238] Setting addon yakd=true in "addons-053741"
	I1020 11:56:56.208112   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208104   15900 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-053741"
	I1020 11:56:56.208121   15900 addons.go:69] Setting registry=true in profile "addons-053741"
	I1020 11:56:56.208156   15900 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-053741"
	I1020 11:56:56.208162   15900 addons.go:238] Setting addon registry=true in "addons-053741"
	I1020 11:56:56.208170   15900 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:56:56.208222   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208227   15900 addons.go:69] Setting volcano=true in profile "addons-053741"
	I1020 11:56:56.208239   15900 addons.go:69] Setting cloud-spanner=true in profile "addons-053741"
	I1020 11:56:56.208250   15900 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-053741"
	I1020 11:56:56.208255   15900 addons.go:238] Setting addon cloud-spanner=true in "addons-053741"
	I1020 11:56:56.208270   15900 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:56:56.208290   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208308   15900 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-053741"
	I1020 11:56:56.208330   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208365   15900 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-053741"
	I1020 11:56:56.208411   15900 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-053741"
	I1020 11:56:56.208444   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208473   15900 addons.go:69] Setting volumesnapshots=true in profile "addons-053741"
	I1020 11:56:56.208503   15900 addons.go:238] Setting addon volumesnapshots=true in "addons-053741"
	I1020 11:56:56.208528   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208559   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208567   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208642   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208712   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208729   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208929   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208963   15900 addons.go:69] Setting inspektor-gadget=true in profile "addons-053741"
	I1020 11:56:56.208985   15900 addons.go:238] Setting addon inspektor-gadget=true in "addons-053741"
	I1020 11:56:56.209006   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.209019   15900 addons.go:69] Setting ingress=true in profile "addons-053741"
	I1020 11:56:56.209036   15900 addons.go:238] Setting addon ingress=true in "addons-053741"
	I1020 11:56:56.209057   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.209375   15900 addons.go:69] Setting ingress-dns=true in profile "addons-053741"
	I1020 11:56:56.209420   15900 addons.go:238] Setting addon ingress-dns=true in "addons-053741"
	I1020 11:56:56.209454   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.209950   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.210140   15900 out.go:179] * Verifying Kubernetes components...
	I1020 11:56:56.208241   15900 addons.go:238] Setting addon volcano=true in "addons-053741"
	I1020 11:56:56.210482   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.210609   15900 addons.go:69] Setting storage-provisioner=true in profile "addons-053741"
	I1020 11:56:56.211350   15900 addons.go:238] Setting addon storage-provisioner=true in "addons-053741"
	I1020 11:56:56.211386   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208929   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.209011   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.211875   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.210966   15900 addons.go:69] Setting default-storageclass=true in profile "addons-053741"
	I1020 11:56:56.212056   15900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-053741"
	I1020 11:56:56.210997   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208951   15900 addons.go:69] Setting registry-creds=true in profile "addons-053741"
	I1020 11:56:56.212368   15900 addons.go:238] Setting addon registry-creds=true in "addons-053741"
	I1020 11:56:56.212402   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.212865   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.212989   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.213146   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.214797   15900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 11:56:56.211104   15900 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-053741"
	I1020 11:56:56.215185   15900 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-053741"
	I1020 11:56:56.215227   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.211113   15900 addons.go:69] Setting metrics-server=true in profile "addons-053741"
	I1020 11:56:56.215441   15900 addons.go:238] Setting addon metrics-server=true in "addons-053741"
	I1020 11:56:56.215475   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.215767   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.216284   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.221416   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.264033   15900 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1020 11:56:56.265376   15900 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1020 11:56:56.265396   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1020 11:56:56.265459   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.265644   15900 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1020 11:56:56.266857   15900 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1020 11:56:56.268164   15900 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1020 11:56:56.268214   15900 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1020 11:56:56.268270   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.268304   15900 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1020 11:56:56.268313   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1020 11:56:56.268361   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.281524   15900 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-053741"
	I1020 11:56:56.281610   15900 host.go:66] Checking if "addons-053741" exists ...
	W1020 11:56:56.292947   15900 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1020 11:56:56.296294   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.298994   15900 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1020 11:56:56.300289   15900 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 11:56:56.300351   15900 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 11:56:56.300592   15900 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1020 11:56:56.300689   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.301937   15900 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 11:56:56.301956   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 11:56:56.302022   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.302304   15900 addons.go:238] Setting addon default-storageclass=true in "addons-053741"
	I1020 11:56:56.302344   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.302955   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.307626   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1020 11:56:56.308939   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.308957   15900 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1020 11:56:56.308975   15900 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1020 11:56:56.309062   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.311968   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1020 11:56:56.312143   15900 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1020 11:56:56.312172   15900 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1020 11:56:56.317195   15900 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1020 11:56:56.317450   15900 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1020 11:56:56.317468   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1020 11:56:56.317532   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.317724   15900 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1020 11:56:56.318202   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1020 11:56:56.318266   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.321932   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1020 11:56:56.322465   15900 out.go:179]   - Using image docker.io/registry:3.0.0
	I1020 11:56:56.324237   15900 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1020 11:56:56.324261   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1020 11:56:56.324329   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.324498   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1020 11:56:56.326412   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1020 11:56:56.327808   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1020 11:56:56.329224   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1020 11:56:56.330426   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1020 11:56:56.331853   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1020 11:56:56.335076   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1020 11:56:56.335095   15900 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1020 11:56:56.335161   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.352956   15900 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1020 11:56:56.354155   15900 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1020 11:56:56.354177   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1020 11:56:56.354249   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.354448   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.360832   15900 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1020 11:56:56.360909   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.366059   15900 out.go:179]   - Using image docker.io/busybox:stable
	I1020 11:56:56.367924   15900 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1020 11:56:56.368051   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1020 11:56:56.368115   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.370948   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.373639   15900 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 11:56:56.375513   15900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 11:56:56.377151   15900 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1020 11:56:56.378490   15900 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 11:56:56.379229   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.380117   15900 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1020 11:56:56.381836   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1020 11:56:56.381974   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.383336   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.385032   15900 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1020 11:56:56.386593   15900 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1020 11:56:56.386612   15900 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1020 11:56:56.386672   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.388876   15900 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 11:56:56.388896   15900 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 11:56:56.388944   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.389177   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.397648   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.399763   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.420010   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.420562   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.425997   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.428788   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.438384   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.447953   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	W1020 11:56:56.451813   15900 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1020 11:56:56.451851   15900 retry.go:31] will retry after 165.090909ms: ssh: handshake failed: EOF
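Handshake EOFs like the one above are expected when a dozen SSH dials hit the same forwarded port at once; the remedy is a short randomized backoff before redialing (165ms here). A generic sketch of that retry shape (not sshutil's actual code):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry redials after a small randomized interval on failure,
	// roughly doubling the backoff cap each attempt.
	func retry(attempts int, dial func() error) error {
		backoff := 100 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			if err = dial(); err == nil {
				return nil
			}
			d := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
			backoff *= 2
		}
		return err
	}

	func main() {
		_ = retry(3, func() error { return nil }) // dial stub
	}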
	I1020 11:56:56.459026   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.459861   15900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 11:56:56.538299   15900 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1020 11:56:56.538327   15900 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1020 11:56:56.539120   15900 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 11:56:56.539140   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1020 11:56:56.556590   15900 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1020 11:56:56.556618   15900 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1020 11:56:56.558157   15900 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 11:56:56.558177   15900 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1020 11:56:56.559648   15900 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1020 11:56:56.559669   15900 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1020 11:56:56.575000   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 11:56:56.575460   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1020 11:56:56.578551   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1020 11:56:56.585381   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1020 11:56:56.588684   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1020 11:56:56.593480   15900 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 11:56:56.593503   15900 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1020 11:56:56.596104   15900 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1020 11:56:56.596128   15900 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1020 11:56:56.598673   15900 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1020 11:56:56.598695   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1020 11:56:56.601674   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1020 11:56:56.601696   15900 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1020 11:56:56.603298   15900 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1020 11:56:56.603315   15900 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1020 11:56:56.610341   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1020 11:56:56.612819   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1020 11:56:56.613942   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 11:56:56.631547   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1020 11:56:56.640752   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 11:56:56.652713   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1020 11:56:56.652744   15900 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1020 11:56:56.654763   15900 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1020 11:56:56.654807   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1020 11:56:56.656734   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1020 11:56:56.664368   15900 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1020 11:56:56.664412   15900 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1020 11:56:56.695428   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1020 11:56:56.718978   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1020 11:56:56.719013   15900 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1020 11:56:56.729581   15900 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1020 11:56:56.729606   15900 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1020 11:56:56.780661   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1020 11:56:56.780687   15900 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1020 11:56:56.785588   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1020 11:56:56.785616   15900 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1020 11:56:56.839235   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1020 11:56:56.839266   15900 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1020 11:56:56.851908   15900 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 11:56:56.851979   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1020 11:56:56.862904   15900 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:56:56.862981   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1020 11:56:56.865097   15900 node_ready.go:35] waiting up to 6m0s for node "addons-053741" to be "Ready" ...
	I1020 11:56:56.865840   15900 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1020 11:56:56.915368   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 11:56:56.916939   15900 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1020 11:56:56.916962   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1020 11:56:56.943591   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:56:56.969926   15900 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1020 11:56:56.969953   15900 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1020 11:56:57.028393   15900 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1020 11:56:57.028418   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1020 11:56:57.078629   15900 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1020 11:56:57.078654   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1020 11:56:57.110148   15900 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1020 11:56:57.110177   15900 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1020 11:56:57.150813   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1020 11:56:57.374763   15900 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-053741" context rescaled to 1 replicas
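The rescale above shrinks the coredns Deployment to one replica. The usual programmatic route is the Deployment's scale subresource rather than rewriting its spec; a minimal client-go sketch, assuming the kubeconfig path from these logs (not minikube's actual kapi.go):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Read the current scale of the coredns Deployment...
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ...and write the desired replica count back through the scale subresource.
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}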
	W1020 11:56:57.502373   15900 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
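The "the object has been modified" failure above is the API server's optimistic-concurrency conflict (HTTP 409): the update carried a stale resourceVersion because another writer touched the local-path StorageClass between read and write. client-go ships a helper that re-runs a read-modify-write loop on exactly this error. A minimal sketch, assuming the same kubeconfig and the standard is-default-class annotation (not the addon's actual callback):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func markDefault(cs *kubernetes.Clientset, name string) error {
	// RetryOnConflict re-runs the closure whenever the update fails with
	// a 409 Conflict, so each attempt starts from a fresh read.
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err // a conflict here triggers another attempt
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	if err := markDefault(kubernetes.NewForConfigOrDie(cfg), "local-path"); err != nil {
		panic(err)
	}
}

retry.DefaultRetry caps the loop at a handful of quick attempts, which is usually enough for a transient 409; the log above instead surfaces the error to the caller as a warning.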
	I1020 11:56:57.700537   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.068945002s)
	I1020 11:56:57.700585   15900 addons.go:479] Verifying addon ingress=true in "addons-053741"
	I1020 11:56:57.700689   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.043930794s)
	I1020 11:56:57.700717   15900 addons.go:479] Verifying addon registry=true in "addons-053741"
	I1020 11:56:57.700647   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.059830793s)
	I1020 11:56:57.700756   15900 addons.go:479] Verifying addon metrics-server=true in "addons-053741"
	I1020 11:56:57.700757   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.005292254s)
	I1020 11:56:57.703231   15900 out.go:179] * Verifying ingress addon...
	I1020 11:56:57.703236   15900 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-053741 service yakd-dashboard -n yakd-dashboard
	
	I1020 11:56:57.703232   15900 out.go:179] * Verifying registry addon...
	I1020 11:56:57.706216   15900 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1020 11:56:57.706251   15900 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1020 11:56:57.708373   15900 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1020 11:56:57.708441   15900 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1020 11:56:57.708457   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
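The kapi.go lines above poll the pods matched by a label selector until they leave Pending. A minimal client-go sketch of that wait loop (the selector and namespace are taken from the registry check above; the helper name waitForLabel is hypothetical):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching selector in ns is Running,
// mirroring the "waiting for pod ... current state: Pending" loop above.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.TODO(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing matched yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("pod %s still %s\n", p.Name, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}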
	I1020 11:56:58.181308   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.265883631s)
	W1020 11:56:58.181375   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1020 11:56:58.181401   15900 retry.go:31] will retry after 342.405105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
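The "no matches for kind VolumeSnapshotClass" failure is an ordering race: the CRD and a resource of its kind were applied in one batch, and the API server had not yet established the new snapshot.storage.k8s.io/v1 REST mapping when the custom resource arrived. Retrying, as the log does, resolves it; applying the CRDs first and blocking on their Established condition avoids it entirely. A sketch shelling out to the same kubectl binary (file paths mirror the log; this is an illustration, not minikube's fix):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes kubectl with the same KUBECONFIG the log lines use.
func run(args ...string) error {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// 1. Create the CRD on its own.
	if err := run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
		panic(err)
	}
	// 2. Block until the API server has established the new kind,
	//    so "no matches for kind" cannot occur.
	if err := run("wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	// 3. Only now apply resources of that kind.
	if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
		panic(err)
	}
	fmt.Println("snapshot class applied")
}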
	I1020 11:56:58.181427   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.237798704s)
	W1020 11:56:58.181461   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:56:58.181478   15900 retry.go:31] will retry after 324.093183ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
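This validation error keeps repeating because at least one YAML document in ig-crd.yaml reaches kubectl with no apiVersion or kind at all, which usually means an empty or truncated document (a stray "---" is enough) rather than a wrong one. A pre-flight check can surface that before shelling out to kubectl; a sketch using gopkg.in/yaml.v3 (checkManifests is a hypothetical helper, not part of minikube):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// checkManifests reports any YAML document in path that is missing the
// apiVersion or kind fields kubectl's validation insists on.
func checkManifests(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 0; ; i++ {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return nil // all documents checked
			}
			return err
		}
		if doc.APIVersion == "" || doc.Kind == "" {
			return fmt.Errorf("%s: document %d has apiVersion/kind unset", path, i)
		}
	}
}

func main() {
	if err := checkManifests("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}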
	I1020 11:56:58.181646   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.030791795s)
	I1020 11:56:58.181671   15900 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-053741"
	I1020 11:56:58.183958   15900 out.go:179] * Verifying csi-hostpath-driver addon...
	I1020 11:56:58.186618   15900 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1020 11:56:58.189360   15900 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1020 11:56:58.189376   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:56:58.290393   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:56:58.290464   15900 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1020 11:56:58.290486   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:56:58.506491   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:56:58.524402   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 11:56:58.689518   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:56:58.709702   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:56:58.709759   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:56:58.867858   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	W1020 11:56:59.059812   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:56:59.059858   15900 retry.go:31] will retry after 336.62005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:56:59.189672   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:56:59.208897   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:56:59.209077   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:56:59.396834   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:56:59.689941   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:56:59.709627   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:56:59.709683   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:00.189643   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:00.209168   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:00.209309   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:00.689570   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:00.708886   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:00.709075   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:00.869727   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:01.012464   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.48801625s)
	I1020 11:57:01.012518   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.615648067s)
	W1020 11:57:01.012558   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:01.012587   15900 retry.go:31] will retry after 784.185305ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:01.190166   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:01.209841   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:01.209927   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:01.689856   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:01.709433   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:01.709492   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:01.797574   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:02.190568   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:02.209002   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:02.209196   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:02.323839   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:02.323872   15900 retry.go:31] will retry after 1.261898765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:02.690544   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:02.709088   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:02.709158   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:03.189382   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:03.208732   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:03.208863   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:03.368263   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:03.585957   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:03.691285   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:03.709641   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:03.709861   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:03.914793   15900 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1020 11:57:03.914863   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:57:03.935080   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:57:04.047292   15900 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1020 11:57:04.060298   15900 addons.go:238] Setting addon gcp-auth=true in "addons-053741"
	I1020 11:57:04.060357   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:57:04.060910   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:57:04.081596   15900 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1020 11:57:04.081647   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:57:04.099795   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	W1020 11:57:04.122353   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:04.122386   15900 retry.go:31] will retry after 1.737686992s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:04.190844   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:04.197300   15900 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 11:57:04.198686   15900 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1020 11:57:04.199904   15900 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1020 11:57:04.199916   15900 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1020 11:57:04.209751   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:04.209974   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:04.213147   15900 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1020 11:57:04.213160   15900 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1020 11:57:04.225371   15900 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1020 11:57:04.225387   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1020 11:57:04.237309   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1020 11:57:04.537986   15900 addons.go:479] Verifying addon gcp-auth=true in "addons-053741"
	I1020 11:57:04.539499   15900 out.go:179] * Verifying gcp-auth addon...
	I1020 11:57:04.541644   15900 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1020 11:57:04.543933   15900 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1020 11:57:04.543953   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:04.689652   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:04.708983   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:04.709188   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:05.044415   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:05.189826   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:05.209324   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:05.209382   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:05.544536   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:05.690150   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:05.709804   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:05.709975   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:05.861132   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1020 11:57:05.867979   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:06.045115   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:06.190522   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:06.209724   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:06.209800   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:06.391176   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:06.391201   15900 retry.go:31] will retry after 1.786080326s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:06.544680   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:06.689394   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:06.708721   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:06.708759   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:07.044989   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:07.189618   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:07.209050   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:07.209150   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:07.545390   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:07.690085   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:07.709800   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:07.710020   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:07.868336   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:08.045033   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:08.178221   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:08.189791   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:08.209509   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:08.209585   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:08.544885   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:08.689800   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:08.708911   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:08.709046   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:08.709272   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:08.709318   15900 retry.go:31] will retry after 3.484695849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:09.044211   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:09.189765   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:09.209428   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:09.209568   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:09.545350   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:09.689946   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:09.709517   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:09.709749   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:10.044945   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:10.189964   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:10.209375   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:10.209501   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:10.368105   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:10.544631   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:10.690037   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:10.709525   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:10.709848   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:11.045176   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:11.189620   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:11.209290   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:11.209462   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:11.544627   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:11.689458   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:11.708975   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:11.709075   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:12.045203   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:12.189867   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:12.194887   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:12.208949   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:12.209090   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:12.369802   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:12.544973   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:12.690140   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:12.709422   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:12.709421   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:12.743433   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:12.743464   15900 retry.go:31] will retry after 3.332044795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:13.045006   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:13.189640   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:13.209491   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:13.209600   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:13.544917   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:13.689743   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:13.709394   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:13.709491   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:14.044746   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:14.189902   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:14.209384   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:14.209546   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:14.544610   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:14.690075   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:14.709558   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:14.709614   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:14.868313   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:15.044951   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:15.189547   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:15.209107   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:15.209334   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:15.544300   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:15.689999   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:15.709572   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:15.709788   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:16.045355   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:16.076495   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:16.190244   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:16.209748   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:16.209936   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:16.544910   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 11:57:16.607182   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:16.607208   15900 retry.go:31] will retry after 5.617223216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
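
The failure above is client-side validation, not an API rejection: every object in ig-deployment.yaml applies cleanly ("unchanged"/"configured"), while ig-crd.yaml is refused before it reaches the server because at least one YAML document in it lacks the apiVersion and kind fields every Kubernetes object requires. A minimal stand-alone sketch of that check, assuming the manifest is a multi-document YAML file and using the gopkg.in/yaml.v3 decoder (the file path mirrors the log; the program is illustrative, not kubectl's actual validator):

    // validate.go: flag YAML documents that would trip kubectl's
    // "apiVersion not set, kind not set" client-side validation.
    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml") // path taken from the log
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for i := 1; ; i++ {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                panic(err)
            }
            if doc == nil { // empty document between "---" separators
                continue
            }
            if doc["apiVersion"] == nil || doc["kind"] == nil {
                fmt.Printf("document %d: apiVersion or kind not set\n", i)
            }
        }
    }
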
	I1020 11:57:16.689726   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:16.709364   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:16.709468   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:17.044536   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:17.190208   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:17.209554   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:17.209739   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:17.368282   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:17.544757   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:17.689377   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:17.709091   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:17.709120   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:18.044891   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:18.190180   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:18.209602   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:18.209618   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:18.544848   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:18.690000   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:18.709406   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:18.709488   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:19.045135   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:19.189524   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:19.209191   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:19.209262   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:19.544213   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:19.689723   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:19.709087   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:19.709251   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:19.867916   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:20.044734   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:20.190587   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:20.209056   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:20.209094   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:20.544070   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:20.689956   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:20.709585   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:20.709630   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:21.045222   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:21.190341   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:21.208854   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:21.209003   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:21.544920   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:21.689313   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:21.708517   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:21.708686   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:21.868139   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:22.045147   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:22.189714   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:22.209433   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:22.209556   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:22.224640   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:22.544995   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:22.689083   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:22.708826   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:22.708964   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:22.753099   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:22.753134   15900 retry.go:31] will retry after 6.164580225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:23.044329   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:23.189876   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:23.209472   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:23.209628   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:23.544486   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:23.689046   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:23.709645   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:23.709805   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:24.044707   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:24.189166   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:24.209568   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:24.209606   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:24.368169   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:24.544584   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:24.689887   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:24.709156   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:24.709404   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:25.045150   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:25.189751   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:25.209203   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:25.209391   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:25.544542   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:25.690187   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:25.709634   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:25.709709   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:26.045034   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:26.189880   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:26.209133   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:26.209370   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:26.544738   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:26.689269   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:26.709541   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:26.709668   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:26.868147   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:27.044746   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:27.189291   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:27.208701   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:27.208844   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:27.544972   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:27.689226   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:27.709620   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:27.709795   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:28.044940   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:28.189610   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:28.208887   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:28.209108   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:28.545292   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:28.690054   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:28.709710   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:28.709862   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:28.868259   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:28.918406   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:29.044732   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:29.189204   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:29.209969   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:29.210028   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:29.444724   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:29.444753   15900 retry.go:31] will retry after 13.378716535s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
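
The delays chosen across these failures (5.617s, then 6.165s, then 13.379s) show the retry helper backing off with randomized, growing intervals rather than a fixed period. A minimal sketch of that pattern, assuming jittered exponential backoff (the doubling factor and jitter range are illustrative, not retry.go's actual constants):

    // backoff.go: retry an operation with jittered, growing delays,
    // as in the "will retry after ..." lines above.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retry(attempts int, base time.Duration, op func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Add up to 50% random jitter so concurrent retries spread out.
            jittered := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
            fmt.Printf("will retry after %s: %v\n", jittered, err)
            time.Sleep(jittered)
            delay *= 2 // grow the base delay each round
        }
        return err
    }

    func main() {
        calls := 0
        _ = retry(4, 5*time.Second, func() error {
            calls++
            if calls < 4 {
                return fmt.Errorf("apply failed (attempt %d)", calls)
            }
            return nil
        })
    }
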
	I1020 11:57:29.544837   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:29.689584   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:29.709148   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:29.709330   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:30.045308   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:30.189714   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:30.209238   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:30.209252   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:30.544152   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:30.689802   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:30.709331   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:30.709548   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:31.044182   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:31.189937   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:31.209400   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:31.209588   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:31.368171   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:31.544584   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:31.689165   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:31.709958   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:31.710028   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:32.045216   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:32.189620   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:32.209144   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:32.209183   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:32.544051   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:32.689797   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:32.709071   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:32.709257   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:33.044113   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:33.189598   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:33.208871   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:33.209073   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:33.368806   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:33.544075   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:33.689674   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:33.709044   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:33.709218   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:34.044312   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:34.189912   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:34.209245   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:34.209450   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:34.544553   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:34.690031   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:34.709486   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:34.709600   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:35.044684   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:35.189153   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:35.209744   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:35.209800   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:35.544803   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:35.689439   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:35.710371   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:35.710551   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:35.867971   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:36.044804   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:36.189310   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:36.209813   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:36.209852   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:36.544905   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:36.689407   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:36.708858   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:36.709019   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:37.045010   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:37.189614   15900 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1020 11:57:37.189635   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:37.209209   15900 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1020 11:57:37.209236   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:37.209280   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:37.370397   15900 node_ready.go:49] node "addons-053741" is "Ready"
	I1020 11:57:37.370428   15900 node_ready.go:38] duration metric: took 40.505294162s for node "addons-053741" to be "Ready" ...
	I1020 11:57:37.370442   15900 api_server.go:52] waiting for apiserver process to appear ...
	I1020 11:57:37.370492   15900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 11:57:37.393786   15900 api_server.go:72] duration metric: took 41.18586834s to wait for apiserver process to appear ...
	I1020 11:57:37.393873   15900 api_server.go:88] waiting for apiserver healthz status ...
	I1020 11:57:37.393910   15900 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1020 11:57:37.399317   15900 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1020 11:57:37.400412   15900 api_server.go:141] control plane version: v1.34.1
	I1020 11:57:37.400440   15900 api_server.go:131] duration metric: took 6.545915ms to wait for apiserver health ...
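
Once the node reports Ready, the health gate is a plain HTTPS GET against the apiserver's /healthz endpoint, which must return 200 with body "ok" before the version and pod checks proceed. A minimal sketch of that probe, using the endpoint from the log and skipping certificate verification purely for brevity (minikube itself trusts the cluster CA and presents client certificates):

    // healthz.go: poll the apiserver /healthz endpoint until it returns 200 "ok".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is a shortcut for this sketch only; real callers
        // should verify against the cluster CA instead.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.49.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
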
	I1020 11:57:37.400449   15900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 11:57:37.404906   15900 system_pods.go:59] 20 kube-system pods found
	I1020 11:57:37.404939   15900 system_pods.go:61] "amd-gpu-device-plugin-pcd5k" [771d33eb-3b8b-487a-bd72-dca77feff4e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1020 11:57:37.404947   15900 system_pods.go:61] "coredns-66bc5c9577-ml6gb" [8b35f446-7459-4498-a912-fa6e117f71f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 11:57:37.404958   15900 system_pods.go:61] "csi-hostpath-attacher-0" [6b608358-4c84-492d-97f3-e8ccb5d5b09e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 11:57:37.404964   15900 system_pods.go:61] "csi-hostpath-resizer-0" [93a13756-bfb2-422a-a929-e9a0cc8f8b00] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 11:57:37.404970   15900 system_pods.go:61] "csi-hostpathplugin-2k9f8" [8d69df41-9ab6-496d-8959-3fe9f44a1aa9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 11:57:37.404987   15900 system_pods.go:61] "etcd-addons-053741" [e767a9c0-87b1-4f85-86f4-42ed7f6b82de] Running
	I1020 11:57:37.404991   15900 system_pods.go:61] "kindnet-5mww7" [35b78024-fef5-4aac-a1b3-5067f6439b9d] Running
	I1020 11:57:37.404995   15900 system_pods.go:61] "kube-apiserver-addons-053741" [1edabbb7-6964-442b-a0fc-b297b3077a72] Running
	I1020 11:57:37.404998   15900 system_pods.go:61] "kube-controller-manager-addons-053741" [a2c978a8-06ad-42f1-94ca-e4c3690cbf9a] Running
	I1020 11:57:37.405003   15900 system_pods.go:61] "kube-ingress-dns-minikube" [ca86be76-bb82-4f7d-b35e-67d8413990b8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 11:57:37.405007   15900 system_pods.go:61] "kube-proxy-f9l25" [bba7f334-1395-4153-9043-fe4ee4ccdae3] Running
	I1020 11:57:37.405014   15900 system_pods.go:61] "kube-scheduler-addons-053741" [b4bc16d7-7daa-4dac-bebf-93d68a8eaf9b] Running
	I1020 11:57:37.405019   15900 system_pods.go:61] "metrics-server-85b7d694d7-5b2cn" [73e0c71b-f373-4a85-a740-1cf7beddfe80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 11:57:37.405028   15900 system_pods.go:61] "nvidia-device-plugin-daemonset-p47g8" [25f655f5-d10c-4bb9-ba62-e9c4612d119b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 11:57:37.405034   15900 system_pods.go:61] "registry-6b586f9694-gb2mv" [da56cdd5-4eb6-423d-a493-a81e51cc6362] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 11:57:37.405040   15900 system_pods.go:61] "registry-creds-764b6fb674-6kcjl" [9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 11:57:37.405044   15900 system_pods.go:61] "registry-proxy-wfdh9" [2f070aaa-93de-4ff2-ac06-e2ff2df91120] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 11:57:37.405051   15900 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2ztzp" [815a9a74-0bcb-4dfb-8f2d-9c12ffe9b97f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:37.405059   15900 system_pods.go:61] "snapshot-controller-7d9fbc56b8-stswk" [aeb1275c-2ae5-49d2-846b-419229d0b6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:37.405071   15900 system_pods.go:61] "storage-provisioner" [5ea0010d-d95c-4399-88b1-f35b6822e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 11:57:37.405087   15900 system_pods.go:74] duration metric: took 4.631925ms to wait for pod list to return data ...
	I1020 11:57:37.405096   15900 default_sa.go:34] waiting for default service account to be created ...
	I1020 11:57:37.407587   15900 default_sa.go:45] found service account: "default"
	I1020 11:57:37.407611   15900 default_sa.go:55] duration metric: took 2.508057ms for default service account to be created ...
	I1020 11:57:37.407621   15900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 11:57:37.505884   15900 system_pods.go:86] 20 kube-system pods found
	I1020 11:57:37.505914   15900 system_pods.go:89] "amd-gpu-device-plugin-pcd5k" [771d33eb-3b8b-487a-bd72-dca77feff4e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1020 11:57:37.505921   15900 system_pods.go:89] "coredns-66bc5c9577-ml6gb" [8b35f446-7459-4498-a912-fa6e117f71f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 11:57:37.505928   15900 system_pods.go:89] "csi-hostpath-attacher-0" [6b608358-4c84-492d-97f3-e8ccb5d5b09e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 11:57:37.505933   15900 system_pods.go:89] "csi-hostpath-resizer-0" [93a13756-bfb2-422a-a929-e9a0cc8f8b00] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 11:57:37.505939   15900 system_pods.go:89] "csi-hostpathplugin-2k9f8" [8d69df41-9ab6-496d-8959-3fe9f44a1aa9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 11:57:37.505943   15900 system_pods.go:89] "etcd-addons-053741" [e767a9c0-87b1-4f85-86f4-42ed7f6b82de] Running
	I1020 11:57:37.505951   15900 system_pods.go:89] "kindnet-5mww7" [35b78024-fef5-4aac-a1b3-5067f6439b9d] Running
	I1020 11:57:37.505954   15900 system_pods.go:89] "kube-apiserver-addons-053741" [1edabbb7-6964-442b-a0fc-b297b3077a72] Running
	I1020 11:57:37.505958   15900 system_pods.go:89] "kube-controller-manager-addons-053741" [a2c978a8-06ad-42f1-94ca-e4c3690cbf9a] Running
	I1020 11:57:37.505962   15900 system_pods.go:89] "kube-ingress-dns-minikube" [ca86be76-bb82-4f7d-b35e-67d8413990b8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 11:57:37.505968   15900 system_pods.go:89] "kube-proxy-f9l25" [bba7f334-1395-4153-9043-fe4ee4ccdae3] Running
	I1020 11:57:37.505972   15900 system_pods.go:89] "kube-scheduler-addons-053741" [b4bc16d7-7daa-4dac-bebf-93d68a8eaf9b] Running
	I1020 11:57:37.505977   15900 system_pods.go:89] "metrics-server-85b7d694d7-5b2cn" [73e0c71b-f373-4a85-a740-1cf7beddfe80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 11:57:37.505985   15900 system_pods.go:89] "nvidia-device-plugin-daemonset-p47g8" [25f655f5-d10c-4bb9-ba62-e9c4612d119b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 11:57:37.505993   15900 system_pods.go:89] "registry-6b586f9694-gb2mv" [da56cdd5-4eb6-423d-a493-a81e51cc6362] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 11:57:37.506000   15900 system_pods.go:89] "registry-creds-764b6fb674-6kcjl" [9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 11:57:37.506007   15900 system_pods.go:89] "registry-proxy-wfdh9" [2f070aaa-93de-4ff2-ac06-e2ff2df91120] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 11:57:37.506012   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ztzp" [815a9a74-0bcb-4dfb-8f2d-9c12ffe9b97f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:37.506020   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-stswk" [aeb1275c-2ae5-49d2-846b-419229d0b6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:37.506025   15900 system_pods.go:89] "storage-provisioner" [5ea0010d-d95c-4399-88b1-f35b6822e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 11:57:37.506042   15900 retry.go:31] will retry after 266.264055ms: missing components: kube-dns
	I1020 11:57:37.545394   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:37.691693   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:37.709866   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:37.709904   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:37.794565   15900 system_pods.go:86] 20 kube-system pods found
	I1020 11:57:37.794604   15900 system_pods.go:89] "amd-gpu-device-plugin-pcd5k" [771d33eb-3b8b-487a-bd72-dca77feff4e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1020 11:57:37.794615   15900 system_pods.go:89] "coredns-66bc5c9577-ml6gb" [8b35f446-7459-4498-a912-fa6e117f71f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 11:57:37.794624   15900 system_pods.go:89] "csi-hostpath-attacher-0" [6b608358-4c84-492d-97f3-e8ccb5d5b09e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 11:57:37.794643   15900 system_pods.go:89] "csi-hostpath-resizer-0" [93a13756-bfb2-422a-a929-e9a0cc8f8b00] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 11:57:37.794651   15900 system_pods.go:89] "csi-hostpathplugin-2k9f8" [8d69df41-9ab6-496d-8959-3fe9f44a1aa9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 11:57:37.794656   15900 system_pods.go:89] "etcd-addons-053741" [e767a9c0-87b1-4f85-86f4-42ed7f6b82de] Running
	I1020 11:57:37.794663   15900 system_pods.go:89] "kindnet-5mww7" [35b78024-fef5-4aac-a1b3-5067f6439b9d] Running
	I1020 11:57:37.794668   15900 system_pods.go:89] "kube-apiserver-addons-053741" [1edabbb7-6964-442b-a0fc-b297b3077a72] Running
	I1020 11:57:37.794673   15900 system_pods.go:89] "kube-controller-manager-addons-053741" [a2c978a8-06ad-42f1-94ca-e4c3690cbf9a] Running
	I1020 11:57:37.794684   15900 system_pods.go:89] "kube-ingress-dns-minikube" [ca86be76-bb82-4f7d-b35e-67d8413990b8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 11:57:37.794689   15900 system_pods.go:89] "kube-proxy-f9l25" [bba7f334-1395-4153-9043-fe4ee4ccdae3] Running
	I1020 11:57:37.794697   15900 system_pods.go:89] "kube-scheduler-addons-053741" [b4bc16d7-7daa-4dac-bebf-93d68a8eaf9b] Running
	I1020 11:57:37.794704   15900 system_pods.go:89] "metrics-server-85b7d694d7-5b2cn" [73e0c71b-f373-4a85-a740-1cf7beddfe80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 11:57:37.794710   15900 system_pods.go:89] "nvidia-device-plugin-daemonset-p47g8" [25f655f5-d10c-4bb9-ba62-e9c4612d119b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 11:57:37.794718   15900 system_pods.go:89] "registry-6b586f9694-gb2mv" [da56cdd5-4eb6-423d-a493-a81e51cc6362] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 11:57:37.794755   15900 system_pods.go:89] "registry-creds-764b6fb674-6kcjl" [9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 11:57:37.794764   15900 system_pods.go:89] "registry-proxy-wfdh9" [2f070aaa-93de-4ff2-ac06-e2ff2df91120] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 11:57:37.794783   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ztzp" [815a9a74-0bcb-4dfb-8f2d-9c12ffe9b97f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:37.794792   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-stswk" [aeb1275c-2ae5-49d2-846b-419229d0b6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:37.794800   15900 system_pods.go:89] "storage-provisioner" [5ea0010d-d95c-4399-88b1-f35b6822e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 11:57:37.794817   15900 retry.go:31] will retry after 253.82825ms: missing components: kube-dns
	I1020 11:57:38.045134   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:38.054965   15900 system_pods.go:86] 20 kube-system pods found
	I1020 11:57:38.055008   15900 system_pods.go:89] "amd-gpu-device-plugin-pcd5k" [771d33eb-3b8b-487a-bd72-dca77feff4e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1020 11:57:38.055020   15900 system_pods.go:89] "coredns-66bc5c9577-ml6gb" [8b35f446-7459-4498-a912-fa6e117f71f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 11:57:38.055031   15900 system_pods.go:89] "csi-hostpath-attacher-0" [6b608358-4c84-492d-97f3-e8ccb5d5b09e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 11:57:38.055038   15900 system_pods.go:89] "csi-hostpath-resizer-0" [93a13756-bfb2-422a-a929-e9a0cc8f8b00] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 11:57:38.055046   15900 system_pods.go:89] "csi-hostpathplugin-2k9f8" [8d69df41-9ab6-496d-8959-3fe9f44a1aa9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 11:57:38.055051   15900 system_pods.go:89] "etcd-addons-053741" [e767a9c0-87b1-4f85-86f4-42ed7f6b82de] Running
	I1020 11:57:38.055057   15900 system_pods.go:89] "kindnet-5mww7" [35b78024-fef5-4aac-a1b3-5067f6439b9d] Running
	I1020 11:57:38.055062   15900 system_pods.go:89] "kube-apiserver-addons-053741" [1edabbb7-6964-442b-a0fc-b297b3077a72] Running
	I1020 11:57:38.055067   15900 system_pods.go:89] "kube-controller-manager-addons-053741" [a2c978a8-06ad-42f1-94ca-e4c3690cbf9a] Running
	I1020 11:57:38.055086   15900 system_pods.go:89] "kube-ingress-dns-minikube" [ca86be76-bb82-4f7d-b35e-67d8413990b8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 11:57:38.055092   15900 system_pods.go:89] "kube-proxy-f9l25" [bba7f334-1395-4153-9043-fe4ee4ccdae3] Running
	I1020 11:57:38.055098   15900 system_pods.go:89] "kube-scheduler-addons-053741" [b4bc16d7-7daa-4dac-bebf-93d68a8eaf9b] Running
	I1020 11:57:38.055113   15900 system_pods.go:89] "metrics-server-85b7d694d7-5b2cn" [73e0c71b-f373-4a85-a740-1cf7beddfe80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 11:57:38.055122   15900 system_pods.go:89] "nvidia-device-plugin-daemonset-p47g8" [25f655f5-d10c-4bb9-ba62-e9c4612d119b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 11:57:38.055130   15900 system_pods.go:89] "registry-6b586f9694-gb2mv" [da56cdd5-4eb6-423d-a493-a81e51cc6362] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 11:57:38.055138   15900 system_pods.go:89] "registry-creds-764b6fb674-6kcjl" [9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 11:57:38.055148   15900 system_pods.go:89] "registry-proxy-wfdh9" [2f070aaa-93de-4ff2-ac06-e2ff2df91120] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 11:57:38.055156   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ztzp" [815a9a74-0bcb-4dfb-8f2d-9c12ffe9b97f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:38.055165   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-stswk" [aeb1275c-2ae5-49d2-846b-419229d0b6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:38.055172   15900 system_pods.go:89] "storage-provisioner" [5ea0010d-d95c-4399-88b1-f35b6822e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 11:57:38.055188   15900 retry.go:31] will retry after 360.959257ms: missing components: kube-dns
	I1020 11:57:38.191039   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:38.210309   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:38.210383   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:38.421546   15900 system_pods.go:86] 20 kube-system pods found
	I1020 11:57:38.421588   15900 system_pods.go:89] "amd-gpu-device-plugin-pcd5k" [771d33eb-3b8b-487a-bd72-dca77feff4e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1020 11:57:38.421597   15900 system_pods.go:89] "coredns-66bc5c9577-ml6gb" [8b35f446-7459-4498-a912-fa6e117f71f5] Running
	I1020 11:57:38.421607   15900 system_pods.go:89] "csi-hostpath-attacher-0" [6b608358-4c84-492d-97f3-e8ccb5d5b09e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 11:57:38.421615   15900 system_pods.go:89] "csi-hostpath-resizer-0" [93a13756-bfb2-422a-a929-e9a0cc8f8b00] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 11:57:38.421625   15900 system_pods.go:89] "csi-hostpathplugin-2k9f8" [8d69df41-9ab6-496d-8959-3fe9f44a1aa9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 11:57:38.421631   15900 system_pods.go:89] "etcd-addons-053741" [e767a9c0-87b1-4f85-86f4-42ed7f6b82de] Running
	I1020 11:57:38.421637   15900 system_pods.go:89] "kindnet-5mww7" [35b78024-fef5-4aac-a1b3-5067f6439b9d] Running
	I1020 11:57:38.421642   15900 system_pods.go:89] "kube-apiserver-addons-053741" [1edabbb7-6964-442b-a0fc-b297b3077a72] Running
	I1020 11:57:38.421647   15900 system_pods.go:89] "kube-controller-manager-addons-053741" [a2c978a8-06ad-42f1-94ca-e4c3690cbf9a] Running
	I1020 11:57:38.421655   15900 system_pods.go:89] "kube-ingress-dns-minikube" [ca86be76-bb82-4f7d-b35e-67d8413990b8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 11:57:38.421662   15900 system_pods.go:89] "kube-proxy-f9l25" [bba7f334-1395-4153-9043-fe4ee4ccdae3] Running
	I1020 11:57:38.421667   15900 system_pods.go:89] "kube-scheduler-addons-053741" [b4bc16d7-7daa-4dac-bebf-93d68a8eaf9b] Running
	I1020 11:57:38.421684   15900 system_pods.go:89] "metrics-server-85b7d694d7-5b2cn" [73e0c71b-f373-4a85-a740-1cf7beddfe80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 11:57:38.421692   15900 system_pods.go:89] "nvidia-device-plugin-daemonset-p47g8" [25f655f5-d10c-4bb9-ba62-e9c4612d119b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 11:57:38.421699   15900 system_pods.go:89] "registry-6b586f9694-gb2mv" [da56cdd5-4eb6-423d-a493-a81e51cc6362] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 11:57:38.421707   15900 system_pods.go:89] "registry-creds-764b6fb674-6kcjl" [9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 11:57:38.421714   15900 system_pods.go:89] "registry-proxy-wfdh9" [2f070aaa-93de-4ff2-ac06-e2ff2df91120] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 11:57:38.421722   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ztzp" [815a9a74-0bcb-4dfb-8f2d-9c12ffe9b97f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:38.421732   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-stswk" [aeb1275c-2ae5-49d2-846b-419229d0b6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:38.421737   15900 system_pods.go:89] "storage-provisioner" [5ea0010d-d95c-4399-88b1-f35b6822e8d5] Running
	I1020 11:57:38.421751   15900 system_pods.go:126] duration metric: took 1.014122806s to wait for k8s-apps to be running ...
	I1020 11:57:38.421762   15900 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 11:57:38.421825   15900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 11:57:38.439493   15900 system_svc.go:56] duration metric: took 17.722957ms WaitForService to wait for kubelet
	I1020 11:57:38.439523   15900 kubeadm.go:586] duration metric: took 42.231622759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 11:57:38.439545   15900 node_conditions.go:102] verifying NodePressure condition ...
	I1020 11:57:38.442832   15900 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 11:57:38.442864   15900 node_conditions.go:123] node cpu capacity is 8
	I1020 11:57:38.442880   15900 node_conditions.go:105] duration metric: took 3.330031ms to run NodePressure ...
	I1020 11:57:38.442898   15900 start.go:241] waiting for startup goroutines ...
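The NodePressure check above reads capacity straight from the node object; the same figures (8 CPUs, 304681132Ki ephemeral storage) can be pulled by hand. A sketch, using the node name from this log:

    $ kubectl get node addons-053741 -o jsonpath='{.status.capacity}'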
	I1020 11:57:38.544694   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:38.690203   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:38.710055   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:38.710157   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:39.045597   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:39.190862   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:39.211512   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:39.211661   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:39.544923   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:39.689577   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:39.709087   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:39.709160   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:40.045133   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:40.189963   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:40.209527   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:40.209629   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:40.544648   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:40.690242   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:40.709195   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:40.709241   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:41.045826   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:41.190115   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:41.210666   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:41.210749   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:41.585743   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:41.689489   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:41.708868   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:41.709000   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:42.044980   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:42.190057   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:42.209800   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:42.209843   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:42.544565   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:42.690933   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:42.709853   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:42.709911   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:42.824203   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:43.045055   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:43.190277   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:43.209967   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:43.209988   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1020 11:57:43.419595   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:43.419628   15900 retry.go:31] will retry after 24.456993091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
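The validation failure above suggests /etc/kubernetes/addons/ig-crd.yaml is being read without the top-level apiVersion and kind fields that kubectl's client-side validation requires on every manifest. A quick manual inspection (a sketch; the profile name and file path are taken from this log) would be:

    $ minikube -p addons-053741 ssh -- head -n 5 /etc/kubernetes/addons/ig-crd.yaml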
	I1020 11:57:43.545905   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:43.690145   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:43.709762   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:43.709882   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:44.045322   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:44.190662   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:44.210947   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:44.211089   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:44.545074   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:44.690115   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:44.790429   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:44.790649   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:45.045219   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:45.190586   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:45.209308   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:45.209530   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:45.574992   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:45.732808   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:45.732868   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:45.732912   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:46.044263   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:46.189863   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:46.209338   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:46.209395   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:46.544982   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:46.690016   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:46.709336   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:46.709538   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:47.045269   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:47.190607   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:47.209739   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:47.210057   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:47.545657   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:47.689801   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:47.709467   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:47.709503   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:48.045714   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:48.190151   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:48.210336   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:48.211691   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:48.544786   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:48.690121   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:48.709895   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:48.709919   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:49.045364   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:49.191497   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:49.209083   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:49.209341   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:49.544855   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:49.690448   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:49.710445   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:49.710456   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:50.045257   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:50.190527   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:50.209222   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:50.209417   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:50.545746   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:50.689921   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:50.709589   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:50.709711   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:51.045328   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:51.191117   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:51.209818   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:51.210039   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:51.545731   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:51.690026   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:51.753708   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:51.753873   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:52.045222   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:52.190410   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:52.210075   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:52.210123   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:52.545194   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:52.690440   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:52.790710   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:52.790864   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:53.044660   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:53.189625   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:53.209456   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:53.209505   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:53.545519   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:53.689682   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:53.709060   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:53.709197   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:54.045731   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:54.190243   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:54.211264   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:54.212242   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:54.547842   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:54.691283   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:54.710831   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:54.711283   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:55.045467   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:55.190492   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:55.210252   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:55.210599   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:55.664932   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:55.768247   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:55.768362   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:55.768513   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:56.046439   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:56.189951   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:56.210083   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:56.210177   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:56.545473   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:56.690869   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:56.710256   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:56.710292   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:57.044972   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:57.190359   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:57.210457   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:57.210512   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:57.545276   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:57.690604   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:57.709718   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:57.709754   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:58.044726   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:58.189676   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:58.211995   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:58.212034   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:58.544543   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:58.690115   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:58.709979   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:58.710208   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:59.045021   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:59.189928   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:59.209490   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:59.209532   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:59.544498   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:59.690807   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:59.709429   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:59.709582   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:00.045815   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:00.189902   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:00.209095   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:00.209229   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:00.545089   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:00.689957   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:00.709676   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:00.709765   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:01.045832   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:01.190162   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:01.211025   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:01.211134   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:01.545415   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:01.690224   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:01.791529   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:01.791650   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:02.044732   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:02.189836   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:02.209386   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:02.209471   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:02.545259   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:02.691055   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:02.709751   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:02.709927   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:03.044603   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:03.190948   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:03.210085   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:03.210150   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:03.545145   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:03.690564   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:03.709451   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:03.709506   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:04.044722   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:04.190104   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:04.209572   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:04.209641   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:04.544581   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:04.690430   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:04.708823   15900 kapi.go:107] duration metric: took 1m7.002562578s to wait for kubernetes.io/minikube-addons=registry ...
	I1020 11:58:04.708992   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:05.045033   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:05.190646   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:05.210559   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:05.553594   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:05.692239   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:05.741596   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:06.045750   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:06.190143   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:06.210366   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:06.545662   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:06.690292   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:06.709630   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:07.045306   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:07.190514   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:07.209150   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:07.544739   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:07.690687   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:07.709370   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:07.877422   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:08.044803   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:08.189847   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:08.209805   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:08.544244   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 11:58:08.578229   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:08.578263   15900 retry.go:31] will retry after 32.122770018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:08.690706   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:08.709401   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:09.045120   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:09.189811   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:09.209158   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:09.545294   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:09.690146   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:09.709905   15900 kapi.go:107] duration metric: took 1m12.00368565s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1020 11:58:10.044469   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:10.191070   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:10.546608   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:10.690536   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:11.044926   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:11.190036   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:11.545576   15900 kapi.go:107] duration metric: took 1m7.003927915s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1020 11:58:11.579682   15900 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-053741 cluster.
	I1020 11:58:11.592494   15900 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1020 11:58:11.666184   15900 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1020 11:58:11.690120   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:12.191542   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:12.690410   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:13.189946   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:13.690895   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:14.191117   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:14.690840   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:15.190683   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:15.690650   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:16.190478   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:16.690007   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:17.190490   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:17.690095   15900 kapi.go:107] duration metric: took 1m19.50347858s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
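These kapi.go polls wait for label-selected pods to report Ready. Outside the test harness, a roughly equivalent manual check (label and namespace taken from the log lines above) is:

    $ kubectl -n kube-system wait --for=condition=Ready pod \
        -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=120s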
	I1020 11:58:40.703166   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1020 11:58:41.233430   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1020 11:58:41.233547   15900 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
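The stderr itself names the workaround: rerun the apply with client-side validation disabled. A hedged reproduction of that suggestion, with every path and the profile name taken from this log:

    $ minikube -p addons-053741 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
        -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml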
	I1020 11:58:41.235358   15900 out.go:179] * Enabled addons: ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, registry-creds, cloud-spanner, default-storageclass, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1020 11:58:41.236604   15900 addons.go:514] duration metric: took 1m45.028690198s for enable addons: enabled=[ingress-dns nvidia-device-plugin amd-gpu-device-plugin storage-provisioner registry-creds cloud-spanner default-storageclass metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1020 11:58:41.236647   15900 start.go:246] waiting for cluster config update ...
	I1020 11:58:41.236670   15900 start.go:255] writing updated cluster config ...
	I1020 11:58:41.236937   15900 ssh_runner.go:195] Run: rm -f paused
	I1020 11:58:41.240751   15900 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 11:58:41.244017   15900 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ml6gb" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.247687   15900 pod_ready.go:94] pod "coredns-66bc5c9577-ml6gb" is "Ready"
	I1020 11:58:41.247706   15900 pod_ready.go:86] duration metric: took 3.670368ms for pod "coredns-66bc5c9577-ml6gb" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.249396   15900 pod_ready.go:83] waiting for pod "etcd-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.252834   15900 pod_ready.go:94] pod "etcd-addons-053741" is "Ready"
	I1020 11:58:41.252858   15900 pod_ready.go:86] duration metric: took 3.444426ms for pod "etcd-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.254584   15900 pod_ready.go:83] waiting for pod "kube-apiserver-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.257917   15900 pod_ready.go:94] pod "kube-apiserver-addons-053741" is "Ready"
	I1020 11:58:41.257940   15900 pod_ready.go:86] duration metric: took 3.337517ms for pod "kube-apiserver-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.259549   15900 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.644244   15900 pod_ready.go:94] pod "kube-controller-manager-addons-053741" is "Ready"
	I1020 11:58:41.644271   15900 pod_ready.go:86] duration metric: took 384.706077ms for pod "kube-controller-manager-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.844411   15900 pod_ready.go:83] waiting for pod "kube-proxy-f9l25" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:42.243951   15900 pod_ready.go:94] pod "kube-proxy-f9l25" is "Ready"
	I1020 11:58:42.243979   15900 pod_ready.go:86] duration metric: took 399.541143ms for pod "kube-proxy-f9l25" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:42.445467   15900 pod_ready.go:83] waiting for pod "kube-scheduler-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:42.844731   15900 pod_ready.go:94] pod "kube-scheduler-addons-053741" is "Ready"
	I1020 11:58:42.844756   15900 pod_ready.go:86] duration metric: took 399.262918ms for pod "kube-scheduler-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:42.844766   15900 pod_ready.go:40] duration metric: took 1.603974009s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 11:58:42.888734   15900 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 11:58:42.890440   15900 out.go:179] * Done! kubectl is now configured to use "addons-053741" cluster and "default" namespace by default
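At this point the cluster is up and every addon except inspektor-gadget enabled cleanly. A quick post-start sanity check, assuming the kubeconfig context configured above:

    $ kubectl get pods -A
    $ minikube -p addons-053741 addons list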
	
	
	==> CRI-O <==
	Oct 20 12:01:26 addons-053741 crio[771]: time="2025-10-20T12:01:26.982591868Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-t6trw/POD" id=4de08768-cdc0-4aed-badc-a11585f96fff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:01:26 addons-053741 crio[771]: time="2025-10-20T12:01:26.982697746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:01:26 addons-053741 crio[771]: time="2025-10-20T12:01:26.988985791Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-t6trw Namespace:default ID:9fe3912bef4e6c58f428503b31615341f699b8cfdde3a59733310bb1599deb5c UID:cb2f29e9-295e-496f-8320-5cc7e8fe1546 NetNS:/var/run/netns/c154569f-8516-463d-91b1-da06e272e5c3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000ad2700}] Aliases:map[]}"
	Oct 20 12:01:26 addons-053741 crio[771]: time="2025-10-20T12:01:26.989020808Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-t6trw to CNI network \"kindnet\" (type=ptp)"
	Oct 20 12:01:26 addons-053741 crio[771]: time="2025-10-20T12:01:26.999400393Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-t6trw Namespace:default ID:9fe3912bef4e6c58f428503b31615341f699b8cfdde3a59733310bb1599deb5c UID:cb2f29e9-295e-496f-8320-5cc7e8fe1546 NetNS:/var/run/netns/c154569f-8516-463d-91b1-da06e272e5c3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000ad2700}] Aliases:map[]}"
	Oct 20 12:01:26 addons-053741 crio[771]: time="2025-10-20T12:01:26.999527034Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-t6trw for CNI network kindnet (type=ptp)"
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.000398551Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.001173698Z" level=info msg="Ran pod sandbox 9fe3912bef4e6c58f428503b31615341f699b8cfdde3a59733310bb1599deb5c with infra container: default/hello-world-app-5d498dc89-t6trw/POD" id=4de08768-cdc0-4aed-badc-a11585f96fff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.002411809Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=7ec3bc8c-6905-4111-93e9-1c2d29b12720 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.002525878Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=7ec3bc8c-6905-4111-93e9-1c2d29b12720 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.002561848Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=7ec3bc8c-6905-4111-93e9-1c2d29b12720 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.003212285Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=56521d95-335e-4992-adaa-207f17ac7482 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.011411269Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.378028224Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=56521d95-335e-4992-adaa-207f17ac7482 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.37869112Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a218b0a5-d9e4-42ad-97ac-4c6383800664 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.380099493Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c43ad76a-e447-4c34-bc65-468381670fa8 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.38369903Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-t6trw/hello-world-app" id=f91595c7-d81d-4a20-9bc8-f391c850c4d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.383854027Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.389095156Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.389256115Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ec934e75bbddc881515d60e91c2ef88c7533f97dd74a710fb2f5efeac1905d24/merged/etc/passwd: no such file or directory"
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.389280869Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ec934e75bbddc881515d60e91c2ef88c7533f97dd74a710fb2f5efeac1905d24/merged/etc/group: no such file or directory"
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.389483414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.419207337Z" level=info msg="Created container ab275d5578a3e81b4d5c4535641bd8c09d069724991432a5bdf61d040b962939: default/hello-world-app-5d498dc89-t6trw/hello-world-app" id=f91595c7-d81d-4a20-9bc8-f391c850c4d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.419894149Z" level=info msg="Starting container: ab275d5578a3e81b4d5c4535641bd8c09d069724991432a5bdf61d040b962939" id=7dff4260-7415-4633-bf7a-8bb10bebb403 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:01:27 addons-053741 crio[771]: time="2025-10-20T12:01:27.421743836Z" level=info msg="Started container" PID=9988 containerID=ab275d5578a3e81b4d5c4535641bd8c09d069724991432a5bdf61d040b962939 description=default/hello-world-app-5d498dc89-t6trw/hello-world-app id=7dff4260-7415-4633-bf7a-8bb10bebb403 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9fe3912bef4e6c58f428503b31615341f699b8cfdde3a59733310bb1599deb5c
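The StartContainer line above logs the new container's ID, which can be inspected directly on the node with crictl; a sketch using the ID from this log:

    $ sudo crictl ps --name hello-world-app
    $ sudo crictl logs ab275d5578a3e81b4d5c4535641bd8c09d069724991432a5bdf61d040b962939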
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	ab275d5578a3e       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   9fe3912bef4e6       hello-world-app-5d498dc89-t6trw             default
	5c56913440165       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   3c4de268da3e2       registry-creds-764b6fb674-6kcjl             kube-system
	733fbeb84b596       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago            Running             nginx                                    0                   cf1fb176fd52c       nginx                                       default
	d9d7c36ffbac9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   8de0161b1dba1       busybox                                     default
	6df2005c3dce4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago            Running             csi-snapshotter                          0                   6ac12bc79833f       csi-hostpathplugin-2k9f8                    kube-system
	02ac8e9a477c9       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago            Running             csi-provisioner                          0                   6ac12bc79833f       csi-hostpathplugin-2k9f8                    kube-system
	2d3daf84e6c96       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago            Running             liveness-probe                           0                   6ac12bc79833f       csi-hostpathplugin-2k9f8                    kube-system
	dd4bb1b4f7046       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago            Running             hostpath                                 0                   6ac12bc79833f       csi-hostpathplugin-2k9f8                    kube-system
	2edf50baac6ba       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago            Running             gadget                                   0                   7bc7d0b7faab4       gadget-bb9nf                                gadget
	6b0cf0f679a40       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago            Running             node-driver-registrar                    0                   6ac12bc79833f       csi-hostpathplugin-2k9f8                    kube-system
	7a2686ee3a166       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago            Running             gcp-auth                                 0                   c2a8ddbe16025       gcp-auth-78565c9fb4-6zzdw                   gcp-auth
	9a70e76cd0e96       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago            Running             controller                               0                   39eac987f8194       ingress-nginx-controller-675c5ddd98-wwnpt   ingress-nginx
	c4b5fa9dcee14       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   6ab594407f7bf       amd-gpu-device-plugin-pcd5k                 kube-system
	b5d282533aea8       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   f88b926aa61e0       csi-hostpath-resizer-0                      kube-system
	c6bc622719c6a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   c8608a162b543       registry-proxy-wfdh9                        kube-system
	570ba942e1d25       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             3 minutes ago            Exited              patch                                    2                   dc339a9258caa       ingress-nginx-admission-patch-4krq9         ingress-nginx
	28a9df06a407b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   6ac12bc79833f       csi-hostpathplugin-2k9f8                    kube-system
	8ee09292e70de       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   42795724b8454       nvidia-device-plugin-daemonset-p47g8        kube-system
	cb34c9f1c580c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   7c8d52ca183a2       snapshot-controller-7d9fbc56b8-2ztzp        kube-system
	51a80cd6bc076       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   6b128ea89ebfa       snapshot-controller-7d9fbc56b8-stswk        kube-system
	d9a30b9299a6e       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   ea0b994c8b79f       csi-hostpath-attacher-0                     kube-system
	9fd6582d19e66       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   1f2900ef06fbe       yakd-dashboard-5ff678cb9-npcnf              yakd-dashboard
	fbcca8cc89164       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              create                                   0                   695787961d8e9       ingress-nginx-admission-create-jbfb9        ingress-nginx
	360ec23af69c7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   23e34dd134d63       local-path-provisioner-648f6765c9-ndz4w     local-path-storage
	9370bc1dd29d3       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   92c6654f1a567       registry-6b586f9694-gb2mv                   kube-system
	307bd8f9af404       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago            Running             cloud-spanner-emulator                   0                   3f13cf3aca3d2       cloud-spanner-emulator-86bd5cbb97-xcpnk     default
	fa80ac0b9cd9c       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   94166c3150192       metrics-server-85b7d694d7-5b2cn             kube-system
	67371a5015804       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   35e61012c6c5a       kube-ingress-dns-minikube                   kube-system
	0f15b4706c771       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   3afdcb527b7a6       coredns-66bc5c9577-ml6gb                    kube-system
	b5c7f9c4b30eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   4f3dd3c7dfc62       storage-provisioner                         kube-system
	52948a7351d92       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   175ebef6509ca       kindnet-5mww7                               kube-system
	daef0b8bb4e24       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   31ade7fbe9491       kube-proxy-f9l25                            kube-system
	3638400d972a3       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   2680c662916a1       kube-controller-manager-addons-053741       kube-system
	fac7a84a8cd03       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   5224b616bc140       kube-apiserver-addons-053741                kube-system
	a165b7f5e69ec       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   f806d78362563       kube-scheduler-addons-053741                kube-system
	d6564015bbe91       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   7c40925ee9a35       etcd-addons-053741                          kube-system
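
The table above is the node's CRI container listing. To reproduce it against this profile (crictl ships in the minikube node image; -a includes the two Exited ingress-nginx admission job containers):

	minikube ssh -p addons-053741 -- sudo crictl ps -a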
	
	
	==> coredns [0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a] <==
	[INFO] 10.244.0.21:58230 - 22650 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00668097s
	[INFO] 10.244.0.21:60142 - 15436 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004508968s
	[INFO] 10.244.0.21:46524 - 36626 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007043292s
	[INFO] 10.244.0.21:48679 - 11884 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004402343s
	[INFO] 10.244.0.21:58088 - 62121 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005312713s
	[INFO] 10.244.0.21:33011 - 23461 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000873015s
	[INFO] 10.244.0.21:52797 - 42213 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001245242s
	[INFO] 10.244.0.24:46181 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000266147s
	[INFO] 10.244.0.24:53387 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000179393s
	[INFO] 10.244.0.31:34570 - 16759 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000199247s
	[INFO] 10.244.0.31:33817 - 51617 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.00026594s
	[INFO] 10.244.0.31:35337 - 32249 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000142333s
	[INFO] 10.244.0.31:35614 - 9581 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000194975s
	[INFO] 10.244.0.31:42969 - 47682 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000123925s
	[INFO] 10.244.0.31:51583 - 59622 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000157591s
	[INFO] 10.244.0.31:60506 - 9471 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003353488s
	[INFO] 10.244.0.31:32931 - 39774 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004296298s
	[INFO] 10.244.0.31:56427 - 42237 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.005927146s
	[INFO] 10.244.0.31:56135 - 41042 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.006838223s
	[INFO] 10.244.0.31:53446 - 7326 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004827619s
	[INFO] 10.244.0.31:39932 - 53654 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005325883s
	[INFO] 10.244.0.31:39461 - 63054 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005188354s
	[INFO] 10.244.0.31:40262 - 26025 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005601477s
	[INFO] 10.244.0.31:55260 - 18482 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001733708s
	[INFO] 10.244.0.31:38306 - 29834 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.002472954s
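
The NXDOMAIN runs above are expected: with the default ndots:5 in pod resolv.conf, external names like storage.googleapis.com and accounts.google.com are first expanded through the cluster and host search domains (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE *.internal suffixes), and each expansion fails before the bare name finally resolves with NOERROR. To fetch these logs from the pod named in this report:

	kubectl -n kube-system logs coredns-66bc5c9577-ml6gb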
	
	
	==> describe nodes <==
	Name:               addons-053741
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-053741
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=addons-053741
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T11_56_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-053741
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-053741"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 11:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-053741
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:01:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:01:25 +0000   Mon, 20 Oct 2025 11:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:01:25 +0000   Mon, 20 Oct 2025 11:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:01:25 +0000   Mon, 20 Oct 2025 11:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:01:25 +0000   Mon, 20 Oct 2025 11:57:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-053741
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                14a15a42-128d-4aa1-9f59-56e441c974e3
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  default                     cloud-spanner-emulator-86bd5cbb97-xcpnk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  default                     hello-world-app-5d498dc89-t6trw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-bb9nf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  gcp-auth                    gcp-auth-78565c9fb4-6zzdw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-wwnpt    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m31s
	  kube-system                 amd-gpu-device-plugin-pcd5k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 coredns-66bc5c9577-ml6gb                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m32s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 csi-hostpathplugin-2k9f8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 etcd-addons-053741                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m38s
	  kube-system                 kindnet-5mww7                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m33s
	  kube-system                 kube-apiserver-addons-053741                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-controller-manager-addons-053741        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-proxy-f9l25                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-addons-053741                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 metrics-server-85b7d694d7-5b2cn              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m31s
	  kube-system                 nvidia-device-plugin-daemonset-p47g8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 registry-6b586f9694-gb2mv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 registry-creds-764b6fb674-6kcjl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 registry-proxy-wfdh9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 snapshot-controller-7d9fbc56b8-2ztzp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 snapshot-controller-7d9fbc56b8-stswk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  local-path-storage          local-path-provisioner-648f6765c9-ndz4w      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-npcnf               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m32s  kube-proxy       
	  Normal  Starting                 4m38s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m38s  kubelet          Node addons-053741 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s  kubelet          Node addons-053741 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s  kubelet          Node addons-053741 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m34s  node-controller  Node addons-053741 event: Registered Node addons-053741 in Controller
	  Normal  NodeReady                3m52s  kubelet          Node addons-053741 status is now: NodeReady
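
Everything from Name: through the Events table is standard kubectl node output; the node is Ready, untainted, and aggregate requests (1050m CPU, 638Mi memory) sit well below the 8-CPU/32Gi capacity. To regenerate it:

	kubectl describe node addons-053741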
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
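
The repeated "martian source 10.244.0.20 from 127.0.0.1" lines mean packets with a loopback source address arrived on eth0; with hairpin NAT between the pod network and the Docker-hosted node this is common and usually benign. To re-read the kernel ring buffer with human-readable timestamps:

	minikube ssh -p addons-053741 -- sudo dmesg --ctime | tail -n 40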
	
	
	==> etcd [d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5] <==
	{"level":"warn","ts":"2025-10-20T11:56:47.278460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.285604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.291650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.299657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.306808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.313012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.320643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.341007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.348032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.354567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.400814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:58.761849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:58.768884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:57:24.838348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:57:24.844618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:57:24.862175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:57:24.868483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:57:45.573103Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.040079ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040758640199966 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" mod_revision:964 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" value_size:2180 >> failure:<request_range:<key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-20T11:57:45.573229Z","caller":"traceutil/trace.go:172","msg":"trace[635650607] transaction","detail":"{read_only:false; response_revision:966; number_of_response:1; }","duration":"218.297006ms","start":"2025-10-20T11:57:45.354914Z","end":"2025-10-20T11:57:45.573211Z","steps":["trace[635650607] 'process raft request'  (duration: 93.598699ms)","trace[635650607] 'compare'  (duration: 123.95377ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T11:57:55.468836Z","caller":"traceutil/trace.go:172","msg":"trace[930065237] transaction","detail":"{read_only:false; response_revision:1067; number_of_response:1; }","duration":"123.362794ms","start":"2025-10-20T11:57:55.345451Z","end":"2025-10-20T11:57:55.468814Z","steps":["trace[930065237] 'process raft request'  (duration: 123.223403ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T11:57:55.662219Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.2383ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T11:57:55.662291Z","caller":"traceutil/trace.go:172","msg":"trace[1496454679] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1067; }","duration":"121.320219ms","start":"2025-10-20T11:57:55.540955Z","end":"2025-10-20T11:57:55.662275Z","steps":["trace[1496454679] 'agreement among raft nodes before linearized reading'  (duration: 40.055541ms)","trace[1496454679] 'range keys from in-memory index tree'  (duration: 81.162988ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T11:57:55.662405Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.239247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T11:57:55.662469Z","caller":"traceutil/trace.go:172","msg":"trace[2060499466] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1068; }","duration":"119.306393ms","start":"2025-10-20T11:57:55.543150Z","end":"2025-10-20T11:57:55.662456Z","steps":["trace[2060499466] 'agreement among raft nodes before linearized reading'  (duration: 119.21031ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T11:57:55.662492Z","caller":"traceutil/trace.go:172","msg":"trace[1136934314] transaction","detail":"{read_only:false; response_revision:1068; number_of_response:1; }","duration":"185.85343ms","start":"2025-10-20T11:57:55.476626Z","end":"2025-10-20T11:57:55.662480Z","steps":["trace[1136934314] 'process raft request'  (duration: 104.409865ms)","trace[1136934314] 'compare'  (duration: 81.136314ms)"],"step_count":2}
	
	
	==> gcp-auth [7a2686ee3a16603af0133e7b3765feeb36a76327a0c538482061c10fd4656b6b] <==
	2025/10/20 11:58:10 GCP Auth Webhook started!
	2025/10/20 11:58:43 Ready to marshal response ...
	2025/10/20 11:58:43 Ready to write response ...
	2025/10/20 11:58:43 Ready to marshal response ...
	2025/10/20 11:58:43 Ready to write response ...
	2025/10/20 11:58:43 Ready to marshal response ...
	2025/10/20 11:58:43 Ready to write response ...
	2025/10/20 11:59:03 Ready to marshal response ...
	2025/10/20 11:59:03 Ready to write response ...
	2025/10/20 11:59:03 Ready to marshal response ...
	2025/10/20 11:59:03 Ready to write response ...
	2025/10/20 11:59:03 Ready to marshal response ...
	2025/10/20 11:59:03 Ready to write response ...
	2025/10/20 11:59:03 Ready to marshal response ...
	2025/10/20 11:59:03 Ready to write response ...
	2025/10/20 11:59:08 Ready to marshal response ...
	2025/10/20 11:59:08 Ready to write response ...
	2025/10/20 11:59:10 Ready to marshal response ...
	2025/10/20 11:59:10 Ready to write response ...
	2025/10/20 11:59:26 Ready to marshal response ...
	2025/10/20 11:59:26 Ready to write response ...
	2025/10/20 12:01:26 Ready to marshal response ...
	2025/10/20 12:01:26 Ready to write response ...
	
	
	==> kernel <==
	 12:01:28 up 43 min,  0 user,  load average: 0.34, 0.89, 0.49
	Linux addons-053741 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238] <==
	I1020 11:59:26.582137       1 main.go:301] handling current node
	I1020 11:59:36.582375       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 11:59:36.582414       1 main.go:301] handling current node
	I1020 11:59:46.585709       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 11:59:46.585746       1 main.go:301] handling current node
	I1020 11:59:56.589863       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 11:59:56.589892       1 main.go:301] handling current node
	I1020 12:00:06.583008       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:00:06.583037       1 main.go:301] handling current node
	I1020 12:00:16.582662       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:00:16.582706       1 main.go:301] handling current node
	I1020 12:00:26.582889       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:00:26.582922       1 main.go:301] handling current node
	I1020 12:00:36.587401       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:00:36.587431       1 main.go:301] handling current node
	I1020 12:00:46.584655       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:00:46.584691       1 main.go:301] handling current node
	I1020 12:00:56.582281       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:00:56.582316       1 main.go:301] handling current node
	I1020 12:01:06.587968       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:01:06.587996       1 main.go:301] handling current node
	I1020 12:01:16.582810       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:01:16.582861       1 main.go:301] handling current node
	I1020 12:01:26.583113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:01:26.583142       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef] <==
	W1020 11:57:24.838219       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1020 11:57:24.844646       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1020 11:57:24.862062       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1020 11:57:24.868430       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1020 11:57:36.932036       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.210.5:443: connect: connection refused
	E1020 11:57:36.932080       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.210.5:443: connect: connection refused" logger="UnhandledError"
	W1020 11:57:36.932197       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.210.5:443: connect: connection refused
	E1020 11:57:36.932229       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.210.5:443: connect: connection refused" logger="UnhandledError"
	W1020 11:57:36.955319       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.210.5:443: connect: connection refused
	E1020 11:57:36.955421       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.210.5:443: connect: connection refused" logger="UnhandledError"
	W1020 11:57:36.957330       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.210.5:443: connect: connection refused
	E1020 11:57:36.957368       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.210.5:443: connect: connection refused" logger="UnhandledError"
	E1020 11:57:45.341343       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.109.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.109.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.109.199:443: connect: connection refused" logger="UnhandledError"
	W1020 11:57:45.341430       1 handler_proxy.go:99] no RequestInfo found in the context
	E1020 11:57:45.341484       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1020 11:57:45.354466       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1020 11:58:52.556674       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36796: use of closed network connection
	E1020 11:58:52.705512       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36826: use of closed network connection
	I1020 11:59:03.764394       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1020 11:59:03.945511       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.238.69"}
	I1020 11:59:17.809328       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1020 12:01:26.748481       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.101.169"}
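
The "Failed calling webhook, failing open gcp-auth-mutate.k8s.io" warnings were emitted while the gcp-auth service was still coming up; "failing open" means the webhook's failurePolicy is Ignore, so admission proceeded without GCP credential injection instead of blocking pod creation. To inspect the registered webhooks and their policy (the grep context flag is illustrative):

	kubectl get mutatingwebhookconfigurations
	kubectl get mutatingwebhookconfigurations -o yaml | grep -B2 failurePolicy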
	
	
	==> kube-controller-manager [3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b] <==
	I1020 11:56:54.824144       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 11:56:54.824217       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 11:56:54.824240       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 11:56:54.824300       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 11:56:54.824346       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 11:56:54.825381       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 11:56:54.825432       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 11:56:54.825519       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 11:56:54.825532       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 11:56:54.828102       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 11:56:54.828156       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 11:56:54.828250       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 11:56:54.828267       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 11:56:54.844852       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 11:56:54.844861       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 11:56:54.844889       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 11:56:54.844899       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1020 11:57:24.832728       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1020 11:57:24.832879       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1020 11:57:24.832912       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1020 11:57:24.853733       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1020 11:57:24.856924       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1020 11:57:24.933517       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 11:57:24.957957       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 11:57:39.780928       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4] <==
	I1020 11:56:56.083297       1 server_linux.go:53] "Using iptables proxy"
	I1020 11:56:56.139284       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 11:56:56.242788       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 11:56:56.242833       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1020 11:56:56.242950       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 11:56:56.315892       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 11:56:56.315949       1 server_linux.go:132] "Using iptables Proxier"
	I1020 11:56:56.342287       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 11:56:56.348572       1 server.go:527] "Version info" version="v1.34.1"
	I1020 11:56:56.348670       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 11:56:56.350194       1 config.go:200] "Starting service config controller"
	I1020 11:56:56.350271       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 11:56:56.350376       1 config.go:309] "Starting node config controller"
	I1020 11:56:56.351116       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 11:56:56.351183       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 11:56:56.350972       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 11:56:56.351237       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 11:56:56.350960       1 config.go:106] "Starting endpoint slice config controller"
	I1020 11:56:56.351296       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 11:56:56.456284       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 11:56:56.461683       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 11:56:56.462323       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b] <==
	E1020 11:56:47.969436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 11:56:47.969500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 11:56:47.969528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 11:56:47.969571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 11:56:47.969578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 11:56:47.969595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 11:56:47.969635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 11:56:47.969694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 11:56:47.969703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 11:56:47.969709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 11:56:47.969747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 11:56:47.970438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 11:56:47.970482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 11:56:47.970517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 11:56:47.970662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 11:56:47.970953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 11:56:47.971018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 11:56:48.777445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 11:56:48.789508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 11:56:48.923837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 11:56:48.969717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 11:56:48.992751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 11:56:49.045716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 11:56:49.193051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1020 11:56:51.767983       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
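
The burst of "Failed to watch ... is forbidden" errors is the usual scheduler startup race: its informers begin listing resources before the apiserver has reconciled the bootstrap RBAC for system:kube-scheduler. The final "Caches are synced" line at 11:56:51 shows it cleared within a few seconds. To confirm the permissions after startup:

	kubectl auth can-i list pods --as=system:kube-scheduler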
	
	
	==> kubelet <==
	Oct 20 11:59:33 addons-053741 kubelet[1310]: I1020 11:59:33.445671    1310 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^38fc855c-adac-11f0-b2ca-921487b0d11a\") pod \"d9d78008-a790-4143-a126-ea2505e1f669\" (UID: \"d9d78008-a790-4143-a126-ea2505e1f669\") "
	Oct 20 11:59:33 addons-053741 kubelet[1310]: I1020 11:59:33.445852    1310 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d9d78008-a790-4143-a126-ea2505e1f669-gcp-creds\") on node \"addons-053741\" DevicePath \"\""
	Oct 20 11:59:33 addons-053741 kubelet[1310]: I1020 11:59:33.448085    1310 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d78008-a790-4143-a126-ea2505e1f669-kube-api-access-7gpqv" (OuterVolumeSpecName: "kube-api-access-7gpqv") pod "d9d78008-a790-4143-a126-ea2505e1f669" (UID: "d9d78008-a790-4143-a126-ea2505e1f669"). InnerVolumeSpecName "kube-api-access-7gpqv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 20 11:59:33 addons-053741 kubelet[1310]: I1020 11:59:33.448692    1310 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^38fc855c-adac-11f0-b2ca-921487b0d11a" (OuterVolumeSpecName: "task-pv-storage") pod "d9d78008-a790-4143-a126-ea2505e1f669" (UID: "d9d78008-a790-4143-a126-ea2505e1f669"). InnerVolumeSpecName "pvc-f8c14145-d376-4d97-bef9-8a4759b5176b". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 20 11:59:33 addons-053741 kubelet[1310]: I1020 11:59:33.546443    1310 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-f8c14145-d376-4d97-bef9-8a4759b5176b\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^38fc855c-adac-11f0-b2ca-921487b0d11a\") on node \"addons-053741\" "
	Oct 20 11:59:33 addons-053741 kubelet[1310]: I1020 11:59:33.546475    1310 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7gpqv\" (UniqueName: \"kubernetes.io/projected/d9d78008-a790-4143-a126-ea2505e1f669-kube-api-access-7gpqv\") on node \"addons-053741\" DevicePath \"\""
	Oct 20 11:59:33 addons-053741 kubelet[1310]: I1020 11:59:33.551477    1310 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-f8c14145-d376-4d97-bef9-8a4759b5176b" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^38fc855c-adac-11f0-b2ca-921487b0d11a") on node "addons-053741"
	Oct 20 11:59:33 addons-053741 kubelet[1310]: I1020 11:59:33.647299    1310 reconciler_common.go:299] "Volume detached for volume \"pvc-f8c14145-d376-4d97-bef9-8a4759b5176b\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^38fc855c-adac-11f0-b2ca-921487b0d11a\") on node \"addons-053741\" DevicePath \"\""
	Oct 20 11:59:33 addons-053741 kubelet[1310]: I1020 11:59:33.733807    1310 scope.go:117] "RemoveContainer" containerID="bbb021c24c65d8f7345f94361995000b2e37317120ddf656b1097192a4dd0f17"
	Oct 20 11:59:33 addons-053741 kubelet[1310]: I1020 11:59:33.744629    1310 scope.go:117] "RemoveContainer" containerID="bbb021c24c65d8f7345f94361995000b2e37317120ddf656b1097192a4dd0f17"
	Oct 20 11:59:33 addons-053741 kubelet[1310]: E1020 11:59:33.745180    1310 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbb021c24c65d8f7345f94361995000b2e37317120ddf656b1097192a4dd0f17\": container with ID starting with bbb021c24c65d8f7345f94361995000b2e37317120ddf656b1097192a4dd0f17 not found: ID does not exist" containerID="bbb021c24c65d8f7345f94361995000b2e37317120ddf656b1097192a4dd0f17"
	Oct 20 11:59:33 addons-053741 kubelet[1310]: I1020 11:59:33.745224    1310 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbb021c24c65d8f7345f94361995000b2e37317120ddf656b1097192a4dd0f17"} err="failed to get container status \"bbb021c24c65d8f7345f94361995000b2e37317120ddf656b1097192a4dd0f17\": rpc error: code = NotFound desc = could not find container \"bbb021c24c65d8f7345f94361995000b2e37317120ddf656b1097192a4dd0f17\": container with ID starting with bbb021c24c65d8f7345f94361995000b2e37317120ddf656b1097192a4dd0f17 not found: ID does not exist"
	Oct 20 11:59:34 addons-053741 kubelet[1310]: I1020 11:59:34.074846    1310 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d78008-a790-4143-a126-ea2505e1f669" path="/var/lib/kubelet/pods/d9d78008-a790-4143-a126-ea2505e1f669/volumes"
	Oct 20 11:59:39 addons-053741 kubelet[1310]: E1020 11:59:39.941049    1310 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-6kcjl" podUID="9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3"
	Oct 20 11:59:50 addons-053741 kubelet[1310]: I1020 11:59:50.094954    1310 scope.go:117] "RemoveContainer" containerID="2fdadd99c2999233c609b824d24eced11cbea1ecc26ca12e64c1bae4bfb5261e"
	Oct 20 11:59:50 addons-053741 kubelet[1310]: I1020 11:59:50.102903    1310 scope.go:117] "RemoveContainer" containerID="5072f2801798059228f9f18190ba50d870e4b34b1973bbde3b9d6c37539e87f1"
	Oct 20 11:59:50 addons-053741 kubelet[1310]: I1020 11:59:50.110328    1310 scope.go:117] "RemoveContainer" containerID="e0eb75f0dd743a56c6fbfbab85977e242c2eecf88dce65abd8eef787f9fb0e0c"
	Oct 20 11:59:50 addons-053741 kubelet[1310]: I1020 11:59:50.119642    1310 scope.go:117] "RemoveContainer" containerID="90a35da5595de79428181b6624e6c5e17fa56c46ace6112e29edde444c4957fb"
	Oct 20 11:59:56 addons-053741 kubelet[1310]: I1020 11:59:56.835821    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-6kcjl" podStartSLOduration=180.18399816 podStartE2EDuration="3m0.835803179s" podCreationTimestamp="2025-10-20 11:56:56 +0000 UTC" firstStartedPulling="2025-10-20 11:59:55.095402052 +0000 UTC m=+185.103381062" lastFinishedPulling="2025-10-20 11:59:55.747207049 +0000 UTC m=+185.755186081" observedRunningTime="2025-10-20 11:59:56.8342589 +0000 UTC m=+186.842237928" watchObservedRunningTime="2025-10-20 11:59:56.835803179 +0000 UTC m=+186.843782205"
	Oct 20 12:00:23 addons-053741 kubelet[1310]: I1020 12:00:23.072322    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-pcd5k" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:00:40 addons-053741 kubelet[1310]: I1020 12:00:40.073298    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wfdh9" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:00:41 addons-053741 kubelet[1310]: I1020 12:00:41.072253    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-p47g8" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:01:26 addons-053741 kubelet[1310]: I1020 12:01:26.753407    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6dn4\" (UniqueName: \"kubernetes.io/projected/cb2f29e9-295e-496f-8320-5cc7e8fe1546-kube-api-access-h6dn4\") pod \"hello-world-app-5d498dc89-t6trw\" (UID: \"cb2f29e9-295e-496f-8320-5cc7e8fe1546\") " pod="default/hello-world-app-5d498dc89-t6trw"
	Oct 20 12:01:26 addons-053741 kubelet[1310]: I1020 12:01:26.753473    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cb2f29e9-295e-496f-8320-5cc7e8fe1546-gcp-creds\") pod \"hello-world-app-5d498dc89-t6trw\" (UID: \"cb2f29e9-295e-496f-8320-5cc7e8fe1546\") " pod="default/hello-world-app-5d498dc89-t6trw"
	Oct 20 12:01:28 addons-053741 kubelet[1310]: I1020 12:01:28.160905    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-t6trw" podStartSLOduration=1.784243246 podStartE2EDuration="2.160884298s" podCreationTimestamp="2025-10-20 12:01:26 +0000 UTC" firstStartedPulling="2025-10-20 12:01:27.002841722 +0000 UTC m=+277.010820740" lastFinishedPulling="2025-10-20 12:01:27.379482784 +0000 UTC m=+277.387461792" observedRunningTime="2025-10-20 12:01:28.159892679 +0000 UTC m=+278.167871707" watchObservedRunningTime="2025-10-20 12:01:28.160884298 +0000 UTC m=+278.168863328"
	
	
	==> storage-provisioner [b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04] <==
	W1020 12:01:04.535482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:06.538541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:06.543838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:08.547140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:08.550839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:10.553486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:10.558320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:12.561660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:12.567429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:14.576423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:14.581561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:16.584443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:16.588126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:18.591218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:18.595137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:20.598434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:20.603897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:22.607186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:22.611558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:24.614137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:24.619260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:26.621883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:26.626058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:28.629842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:01:28.633925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
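The storage-provisioner warnings above are incidental to this failure, but they flag a real API migration: the provisioner still reads v1 Endpoints (one warning per request in the log), an API deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A quick way to compare the two resources on this cluster, assuming the kubectl context name from this run:

    # the deprecated API the provisioner keeps polling
    kubectl --context addons-053741 get endpoints -n kube-system
    # the EndpointSlice objects that supersede it
    kubectl --context addons-053741 get endpointslices.discovery.k8s.io -n kube-system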
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-053741 -n addons-053741
helpers_test.go:269: (dbg) Run:  kubectl --context addons-053741 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-jbfb9 ingress-nginx-admission-patch-4krq9
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-053741 describe pod ingress-nginx-admission-create-jbfb9 ingress-nginx-admission-patch-4krq9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-053741 describe pod ingress-nginx-admission-create-jbfb9 ingress-nginx-admission-patch-4krq9: exit status 1 (59.72816ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jbfb9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4krq9" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-053741 describe pod ingress-nginx-admission-create-jbfb9 ingress-nginx-admission-patch-4krq9: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (235.932014ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:01:29.356887   30723 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:01:29.357170   30723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:01:29.357178   30723 out.go:374] Setting ErrFile to fd 2...
	I1020 12:01:29.357182   30723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:01:29.357373   30723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:01:29.357627   30723 mustload.go:65] Loading cluster: addons-053741
	I1020 12:01:29.357933   30723 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:01:29.357946   30723 addons.go:606] checking whether the cluster is paused
	I1020 12:01:29.358017   30723 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:01:29.358035   30723 host.go:66] Checking if "addons-053741" exists ...
	I1020 12:01:29.358385   30723 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 12:01:29.377331   30723 ssh_runner.go:195] Run: systemctl --version
	I1020 12:01:29.377394   30723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 12:01:29.396672   30723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 12:01:29.496509   30723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:01:29.496594   30723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:01:29.526170   30723 cri.go:89] found id: "5c56913440165a40ee69799331793283a687edbe82c4560a12bd4f4774f4b55a"
	I1020 12:01:29.526194   30723 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 12:01:29.526199   30723 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 12:01:29.526203   30723 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 12:01:29.526208   30723 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 12:01:29.526213   30723 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 12:01:29.526217   30723 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 12:01:29.526221   30723 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 12:01:29.526225   30723 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 12:01:29.526233   30723 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 12:01:29.526237   30723 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 12:01:29.526241   30723 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 12:01:29.526245   30723 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 12:01:29.526255   30723 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 12:01:29.526260   30723 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 12:01:29.526354   30723 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 12:01:29.526378   30723 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 12:01:29.526385   30723 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 12:01:29.526390   30723 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 12:01:29.526394   30723 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 12:01:29.526398   30723 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 12:01:29.526402   30723 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 12:01:29.526409   30723 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 12:01:29.526413   30723 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 12:01:29.526421   30723 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 12:01:29.526425   30723 cri.go:89] found id: ""
	I1020 12:01:29.526477   30723 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:01:29.541051   30723 out.go:203] 
	W1020 12:01:29.542567   30723 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:01:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:01:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:01:29.542587   30723 out.go:285] * 
	* 
	W1020 12:01:29.545742   30723 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:01:29.547226   30723 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable ingress --alsologtostderr -v=1: exit status 11 (233.594549ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:01:29.593439   30786 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:01:29.593754   30786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:01:29.593766   30786 out.go:374] Setting ErrFile to fd 2...
	I1020 12:01:29.593788   30786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:01:29.594000   30786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:01:29.594292   30786 mustload.go:65] Loading cluster: addons-053741
	I1020 12:01:29.594630   30786 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:01:29.594650   30786 addons.go:606] checking whether the cluster is paused
	I1020 12:01:29.594749   30786 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:01:29.594766   30786 host.go:66] Checking if "addons-053741" exists ...
	I1020 12:01:29.595190   30786 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 12:01:29.612941   30786 ssh_runner.go:195] Run: systemctl --version
	I1020 12:01:29.612997   30786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 12:01:29.630988   30786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 12:01:29.730375   30786 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:01:29.730438   30786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:01:29.759736   30786 cri.go:89] found id: "5c56913440165a40ee69799331793283a687edbe82c4560a12bd4f4774f4b55a"
	I1020 12:01:29.759765   30786 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 12:01:29.759784   30786 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 12:01:29.759789   30786 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 12:01:29.759793   30786 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 12:01:29.759798   30786 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 12:01:29.759802   30786 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 12:01:29.759806   30786 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 12:01:29.759810   30786 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 12:01:29.759829   30786 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 12:01:29.759833   30786 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 12:01:29.759836   30786 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 12:01:29.759838   30786 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 12:01:29.759840   30786 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 12:01:29.759843   30786 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 12:01:29.759850   30786 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 12:01:29.759856   30786 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 12:01:29.759860   30786 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 12:01:29.759863   30786 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 12:01:29.759865   30786 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 12:01:29.759868   30786 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 12:01:29.759870   30786 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 12:01:29.759872   30786 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 12:01:29.759880   30786 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 12:01:29.759883   30786 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 12:01:29.759885   30786 cri.go:89] found id: ""
	I1020 12:01:29.759925   30786 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:01:29.775052   30786 out.go:203] 
	W1020 12:01:29.776506   30786 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:01:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:01:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:01:29.776525   30786 out.go:285] * 
	* 
	W1020 12:01:29.779606   30786 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:01:29.781029   30786 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.28s)
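Every addons-disable failure in this report shares the root cause visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused, and that check shells out to `sudo runc list -f json`. On this cri-o node /run/runc does not exist, so the probe itself exits non-zero and minikube aborts with MK_ADDON_DISABLE_PAUSED (exit status 11), even though crictl lists the kube-system containers as running a few lines earlier. A minimal way to confirm this against the same profile, assuming the binary and profile name used in this run:

    # what succeeds in the logs above: list kube-system containers through cri-o's CLI
    out/minikube-linux-amd64 -p addons-053741 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # the paused-state probe minikube issues; runc's default state dir is absent on this image
    out/minikube-linux-amd64 -p addons-053741 ssh -- sudo runc list -f json
    # inspect which runtime state directories cri-o actually populates here
    out/minikube-linux-amd64 -p addons-053741 ssh -- ls /run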

TestAddons/parallel/InspektorGadget (5.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-bb9nf" [13e3f964-d33a-43bb-ad5c-2f8c0838c20a] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005739296s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (249.254981ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 11:59:03.237697   25696 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:59:03.237856   25696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:03.237865   25696 out.go:374] Setting ErrFile to fd 2...
	I1020 11:59:03.237869   25696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:03.238073   25696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:59:03.238347   25696 mustload.go:65] Loading cluster: addons-053741
	I1020 11:59:03.238676   25696 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:03.238691   25696 addons.go:606] checking whether the cluster is paused
	I1020 11:59:03.238767   25696 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:03.238810   25696 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:59:03.239209   25696 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:59:03.258476   25696 ssh_runner.go:195] Run: systemctl --version
	I1020 11:59:03.258543   25696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:59:03.280114   25696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:59:03.381306   25696 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:59:03.381406   25696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:59:03.415924   25696 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:59:03.415981   25696 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:59:03.415987   25696 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:59:03.415989   25696 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:59:03.415992   25696 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:59:03.415996   25696 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:59:03.415999   25696 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:59:03.416001   25696 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:59:03.416004   25696 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:59:03.416017   25696 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:59:03.416023   25696 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:59:03.416025   25696 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:59:03.416028   25696 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:59:03.416031   25696 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:59:03.416033   25696 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:59:03.416037   25696 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:59:03.416040   25696 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:59:03.416045   25696 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:59:03.416047   25696 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:59:03.416049   25696 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:59:03.416054   25696 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:59:03.416056   25696 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:59:03.416058   25696 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:59:03.416060   25696 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:59:03.416068   25696 cri.go:89] found id: ""
	I1020 11:59:03.416118   25696 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:59:03.431286   25696 out.go:203] 
	W1020 11:59:03.432615   25696 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:59:03.432636   25696 out.go:285] * 
	* 
	W1020 11:59:03.435752   25696 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:59:03.437499   25696 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.26s)

TestAddons/parallel/MetricsServer (5.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.344733ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-5b2cn" [73e0c71b-f373-4a85-a740-1cf7beddfe80] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003514881s
addons_test.go:463: (dbg) Run:  kubectl --context addons-053741 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (249.146966ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 11:59:03.301806   25729 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:59:03.302097   25729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:03.302109   25729 out.go:374] Setting ErrFile to fd 2...
	I1020 11:59:03.302114   25729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:03.302309   25729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:59:03.302571   25729 mustload.go:65] Loading cluster: addons-053741
	I1020 11:59:03.302969   25729 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:03.302999   25729 addons.go:606] checking whether the cluster is paused
	I1020 11:59:03.303094   25729 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:03.303108   25729 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:59:03.303493   25729 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:59:03.321635   25729 ssh_runner.go:195] Run: systemctl --version
	I1020 11:59:03.321710   25729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:59:03.340556   25729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:59:03.442216   25729 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:59:03.442289   25729 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:59:03.477608   25729 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:59:03.477630   25729 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:59:03.477634   25729 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:59:03.477638   25729 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:59:03.477643   25729 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:59:03.477648   25729 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:59:03.477653   25729 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:59:03.477657   25729 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:59:03.477661   25729 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:59:03.477668   25729 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:59:03.477673   25729 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:59:03.477677   25729 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:59:03.477681   25729 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:59:03.477686   25729 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:59:03.477690   25729 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:59:03.477701   25729 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:59:03.477708   25729 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:59:03.477712   25729 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:59:03.477715   25729 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:59:03.477718   25729 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:59:03.477720   25729 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:59:03.477723   25729 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:59:03.477725   25729 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:59:03.477728   25729 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:59:03.477730   25729 cri.go:89] found id: ""
	I1020 11:59:03.477764   25729 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:59:03.492661   25729 out.go:203] 
	W1020 11:59:03.493942   25729 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:59:03.493967   25729 out.go:285] * 
	* 
	W1020 11:59:03.497993   25729 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:59:03.499331   25729 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)

TestAddons/parallel/CSI (39.11s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1020 11:58:55.442230   14592 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1020 11:58:55.445599   14592 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1020 11:58:55.445623   14592 kapi.go:107] duration metric: took 3.409745ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.419345ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-053741 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-053741 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [e4d6a464-0d4d-494e-8d5d-5532c7b09026] Pending
helpers_test.go:352: "task-pv-pod" [e4d6a464-0d4d-494e-8d5d-5532c7b09026] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [e4d6a464-0d4d-494e-8d5d-5532c7b09026] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003285449s
addons_test.go:572: (dbg) Run:  kubectl --context addons-053741 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-053741 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-053741 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-053741 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-053741 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-053741 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-053741 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d9d78008-a790-4143-a126-ea2505e1f669] Pending
helpers_test.go:352: "task-pv-pod-restore" [d9d78008-a790-4143-a126-ea2505e1f669] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d9d78008-a790-4143-a126-ea2505e1f669] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004030082s
addons_test.go:614: (dbg) Run:  kubectl --context addons-053741 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-053741 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-053741 delete volumesnapshot new-snapshot-demo
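Up to this point the CSI data path itself passes (claim, pod, snapshot, restore from snapshot); only the addon-disable step below trips the same MK_ADDON_DISABLE_PAUSED error. The testdata manifests are not reproduced in this report; a restore claim of the shape this test exercises would look roughly like the sketch below, where the storage class name and requested size are assumptions rather than values taken from this log:

    # hypothetical equivalent of testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-053741 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc    # assumed default class of the csi-hostpath-driver addon
      dataSource:
        name: new-snapshot-demo            # the VolumeSnapshot created earlier in this test
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 1Gi                     # assumed size
    EOF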
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (233.685251ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 11:59:34.125363   28403 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:59:34.125679   28403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:34.125689   28403 out.go:374] Setting ErrFile to fd 2...
	I1020 11:59:34.125693   28403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:34.125899   28403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:59:34.126164   28403 mustload.go:65] Loading cluster: addons-053741
	I1020 11:59:34.126491   28403 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:34.126505   28403 addons.go:606] checking whether the cluster is paused
	I1020 11:59:34.126582   28403 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:34.126593   28403 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:59:34.126950   28403 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:59:34.145328   28403 ssh_runner.go:195] Run: systemctl --version
	I1020 11:59:34.145389   28403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:59:34.165051   28403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:59:34.263210   28403 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:59:34.263271   28403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:59:34.292871   28403 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:59:34.292893   28403 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:59:34.292897   28403 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:59:34.292900   28403 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:59:34.292905   28403 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:59:34.292908   28403 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:59:34.292911   28403 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:59:34.292914   28403 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:59:34.292916   28403 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:59:34.292921   28403 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:59:34.292924   28403 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:59:34.292926   28403 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:59:34.292929   28403 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:59:34.292942   28403 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:59:34.292945   28403 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:59:34.292950   28403 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:59:34.292952   28403 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:59:34.292955   28403 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:59:34.292958   28403 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:59:34.292960   28403 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:59:34.292962   28403 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:59:34.292965   28403 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:59:34.292967   28403 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:59:34.292969   28403 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:59:34.292972   28403 cri.go:89] found id: ""
	I1020 11:59:34.293008   28403 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:59:34.306829   28403 out.go:203] 
	W1020 11:59:34.308392   28403 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:59:34.308412   28403 out.go:285] * 
	* 
	W1020 11:59:34.311558   28403 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:59:34.312993   28403 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (231.863978ms)
-- stdout --

-- /stdout --
** stderr ** 
	I1020 11:59:34.359365   28465 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:59:34.359517   28465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:34.359528   28465 out.go:374] Setting ErrFile to fd 2...
	I1020 11:59:34.359533   28465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:34.359743   28465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:59:34.360005   28465 mustload.go:65] Loading cluster: addons-053741
	I1020 11:59:34.360366   28465 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:34.360381   28465 addons.go:606] checking whether the cluster is paused
	I1020 11:59:34.360465   28465 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:34.360476   28465 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:59:34.360845   28465 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:59:34.379254   28465 ssh_runner.go:195] Run: systemctl --version
	I1020 11:59:34.379310   28465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:59:34.396878   28465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:59:34.496487   28465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:59:34.496562   28465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:59:34.524450   28465 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:59:34.524472   28465 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:59:34.524476   28465 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:59:34.524479   28465 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:59:34.524481   28465 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:59:34.524485   28465 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:59:34.524488   28465 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:59:34.524490   28465 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:59:34.524493   28465 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:59:34.524497   28465 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:59:34.524500   28465 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:59:34.524502   28465 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:59:34.524514   28465 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:59:34.524519   28465 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:59:34.524523   28465 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:59:34.524530   28465 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:59:34.524537   28465 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:59:34.524542   28465 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:59:34.524553   28465 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:59:34.524560   28465 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:59:34.524567   28465 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:59:34.524572   28465 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:59:34.524575   28465 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:59:34.524577   28465 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:59:34.524580   28465 cri.go:89] found id: ""
	I1020 11:59:34.524623   28465 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:59:34.538568   28465 out.go:203] 
	W1020 11:59:34.539938   28465 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:59:34.539967   28465 out.go:285] * 
	* 
	W1020 11:59:34.543384   28465 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:59:34.544806   28465 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (39.11s)
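Both disable attempts above fail identically: before touching an addon, minikube probes whether the cluster is paused, and that probe shells out to `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory" because runc has never created its state directory on this crio node. The sketch below shows one way such a probe could treat the missing state directory as "no paused containers" instead of a hard failure; the helper name listPaused and the fallback itself are illustrative assumptions, not minikube's actual code.

    // Sketch only: a paused-container probe that tolerates a node where
    // /run/runc does not exist yet. Not minikube's implementation.
    package main

    import (
        "encoding/json"
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    // runcState is the subset of `runc list -f json` output read here.
    type runcState struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func listPaused() ([]string, error) {
        out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
        if err != nil {
            var ee *exec.ExitError
            // This is the exact failure captured above: runc exits 1 after
            // failing to open its (never-created) state directory.
            if errors.As(err, &ee) && strings.Contains(string(ee.Stderr), "no such file or directory") {
                return nil, nil // no state dir: nothing has run under runc, so nothing is paused
            }
            return nil, err
        }
        var states []runcState
        if err := json.Unmarshal(out, &states); err != nil {
            return nil, err
        }
        var paused []string
        for _, s := range states {
            if s.Status == "paused" {
                paused = append(paused, s.ID)
            }
        }
        return paused, nil
    }

    func main() {
        ids, err := listPaused()
        fmt.Printf("paused=%v err=%v\n", ids, err)
    }

With a fallback like this, the probe would report an empty paused set on the node above instead of escalating runc's exit status into MK_ADDON_DISABLE_PAUSED.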
TestAddons/parallel/Headlamp (2.5s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-053741 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-053741 --alsologtostderr -v=1: exit status 11 (233.445501ms)
-- stdout --

-- /stdout --
** stderr ** 
	I1020 11:58:52.986037   24606 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:58:52.986356   24606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:58:52.986367   24606 out.go:374] Setting ErrFile to fd 2...
	I1020 11:58:52.986370   24606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:58:52.986621   24606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:58:52.986953   24606 mustload.go:65] Loading cluster: addons-053741
	I1020 11:58:52.987304   24606 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:58:52.987319   24606 addons.go:606] checking whether the cluster is paused
	I1020 11:58:52.987411   24606 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:58:52.987424   24606 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:58:52.987838   24606 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:58:53.005528   24606 ssh_runner.go:195] Run: systemctl --version
	I1020 11:58:53.005576   24606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:58:53.022491   24606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:58:53.121815   24606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:58:53.121907   24606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:58:53.152064   24606 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:58:53.152096   24606 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:58:53.152101   24606 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:58:53.152106   24606 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:58:53.152109   24606 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:58:53.152115   24606 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:58:53.152118   24606 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:58:53.152122   24606 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:58:53.152125   24606 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:58:53.152139   24606 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:58:53.152143   24606 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:58:53.152147   24606 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:58:53.152151   24606 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:58:53.152156   24606 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:58:53.152160   24606 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:58:53.152170   24606 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:58:53.152178   24606 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:58:53.152184   24606 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:58:53.152188   24606 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:58:53.152192   24606 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:58:53.152196   24606 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:58:53.152200   24606 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:58:53.152203   24606 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:58:53.152206   24606 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:58:53.152209   24606 cri.go:89] found id: ""
	I1020 11:58:53.152270   24606 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:58:53.166340   24606 out.go:203] 
	W1020 11:58:53.167661   24606 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:58:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:58:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:58:53.167685   24606 out.go:285] * 
	* 
	W1020 11:58:53.170602   24606 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:58:53.171916   24606 out.go:203] 
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-053741 --alsologtostderr -v=1": exit status 11
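The enable path trips over the same probe, labeled MK_ADDON_ENABLE_PAUSED here: cri.go:54 successfully enumerates the 24 kube-system containers through crictl, and only the follow-up `runc list` fails. Below is a minimal sketch of that enumeration step, reusing the crictl invocation from the log; the helper name kubeSystemContainers is hypothetical, not minikube's API.

    // Sketch: re-run the container enumeration from cri.go:54 above.
    // The crictl command line is taken from the log; the helper is
    // hypothetical.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func kubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        // --quiet prints one container ID per line, matching the
        // cri.go:89 "found id:" entries above.
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := kubeSystemContainers()
        fmt.Printf("found %d kube-system containers, err=%v\n", len(ids), err)
    }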
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-053741
helpers_test.go:243: (dbg) docker inspect addons-053741:
-- stdout --
	[
	    {
	        "Id": "e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc",
	        "Created": "2025-10-20T11:56:32.897693096Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16557,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T11:56:32.932133936Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc/hosts",
	        "LogPath": "/var/lib/docker/containers/e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc/e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc-json.log",
	        "Name": "/addons-053741",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-053741:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-053741",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4704220315bcf7ae375767b1335d99a845d360c014c2abd49eeaf1ca764cedc",
	                "LowerDir": "/var/lib/docker/overlay2/10cb3573eba5f3a4e587290f1b4c97305b0d9a2613f14accbd00160ea845cbf4-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10cb3573eba5f3a4e587290f1b4c97305b0d9a2613f14accbd00160ea845cbf4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10cb3573eba5f3a4e587290f1b4c97305b0d9a2613f14accbd00160ea845cbf4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10cb3573eba5f3a4e587290f1b4c97305b0d9a2613f14accbd00160ea845cbf4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-053741",
	                "Source": "/var/lib/docker/volumes/addons-053741/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-053741",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-053741",
	                "name.minikube.sigs.k8s.io": "addons-053741",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3746cc6c75059c5c031ae9b0f2b8b0f935f28fde031ed5d409712924ccadc61e",
	            "SandboxKey": "/var/run/docker/netns/3746cc6c7505",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-053741": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:b1:d2:4c:50:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "af24c59a8b3649aed66b6500324487830ea6dc59f069d7c296b0e8ad05150727",
	                    "EndpointID": "4cb647936ac949ee914ddf3904d1f79047f4c63c913e9a5ed7835c6544c9681d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-053741",
	                        "e4704220315b"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
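The inspect output above is also where the earlier sshutil.go:53 line got its endpoint: the node's 22/tcp is published on 127.0.0.1:32768. The sketch below recovers the same host port from `docker inspect` JSON rather than the Go template shown in the log; struct fields mirror the output above, and the function name sshHostPort is illustrative.

    // Sketch: read the host port bound to the node's SSH port (22/tcp)
    // from `docker inspect` JSON. sshHostPort is a hypothetical helper.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type inspectEntry struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func sshHostPort(container string) (string, error) {
        out, err := exec.Command("docker", "inspect", container).Output()
        if err != nil {
            return "", err
        }
        var entries []inspectEntry
        if err := json.Unmarshal(out, &entries); err != nil {
            return "", err
        }
        if len(entries) == 0 || len(entries[0].NetworkSettings.Ports["22/tcp"]) == 0 {
            return "", fmt.Errorf("no 22/tcp binding for %s", container)
        }
        return entries[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
    }

    func main() {
        port, err := sshHostPort("addons-053741")
        fmt.Println(port, err) // "32768" on the node inspected above
    }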
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-053741 -n addons-053741
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-053741 logs -n 25: (1.113309151s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME
	start │ -o=json --download-only -p download-only-611429 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-611429 │ jenkins │ v1.37.0 │ 20 Oct 25 11:55 UTC │
	delete │ --all │ minikube │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:56 UTC
	delete │ -p download-only-611429 │ download-only-611429 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:56 UTC
	start │ -o=json --download-only -p download-only-877202 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-877202 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │
	delete │ --all │ minikube │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:56 UTC
	delete │ -p download-only-877202 │ download-only-877202 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:56 UTC
	delete │ -p download-only-611429 │ download-only-611429 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:56 UTC
	delete │ -p download-only-877202 │ download-only-877202 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:56 UTC
	start │ --download-only -p download-docker-175079 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-175079 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │
	delete │ -p download-docker-175079 │ download-docker-175079 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:56 UTC
	start │ --download-only -p binary-mirror-253279 --alsologtostderr --binary-mirror http://127.0.0.1:42287 --driver=docker  --container-runtime=crio │ binary-mirror-253279 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │
	delete │ -p binary-mirror-253279 │ binary-mirror-253279 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:56 UTC
	addons │ disable dashboard -p addons-053741 │ addons-053741 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │
	addons │ enable dashboard -p addons-053741 │ addons-053741 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │
	start │ -p addons-053741 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-053741 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:58 UTC
	addons │ addons-053741 addons disable volcano --alsologtostderr -v=1 │ addons-053741 │ jenkins │ v1.37.0 │ 20 Oct 25 11:58 UTC │
	addons │ addons-053741 addons disable gcp-auth --alsologtostderr -v=1 │ addons-053741 │ jenkins │ v1.37.0 │ 20 Oct 25 11:58 UTC │
	addons │ enable headlamp -p addons-053741 --alsologtostderr -v=1 │ addons-053741 │ jenkins │ v1.37.0 │ 20 Oct 25 11:58 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 11:56:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 11:56:08.612168   15900 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:56:08.612405   15900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:56:08.612413   15900 out.go:374] Setting ErrFile to fd 2...
	I1020 11:56:08.612417   15900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:56:08.612604   15900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:56:08.613147   15900 out.go:368] Setting JSON to false
	I1020 11:56:08.613922   15900 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2318,"bootTime":1760959051,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 11:56:08.614006   15900 start.go:141] virtualization: kvm guest
	I1020 11:56:08.616230   15900 out.go:179] * [addons-053741] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 11:56:08.617876   15900 notify.go:220] Checking for updates...
	I1020 11:56:08.617903   15900 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 11:56:08.619578   15900 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 11:56:08.621112   15900 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 11:56:08.622473   15900 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 11:56:08.623967   15900 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 11:56:08.625562   15900 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 11:56:08.627114   15900 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 11:56:08.650451   15900 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 11:56:08.650537   15900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 11:56:08.707089   15900 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-20 11:56:08.697856355 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 11:56:08.707190   15900 docker.go:318] overlay module found
	I1020 11:56:08.709194   15900 out.go:179] * Using the docker driver based on user configuration
	I1020 11:56:08.710436   15900 start.go:305] selected driver: docker
	I1020 11:56:08.710450   15900 start.go:925] validating driver "docker" against <nil>
	I1020 11:56:08.710460   15900 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 11:56:08.711011   15900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 11:56:08.772032   15900 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-20 11:56:08.762946483 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 11:56:08.772250   15900 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 11:56:08.772446   15900 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 11:56:08.774323   15900 out.go:179] * Using Docker driver with root privileges
	I1020 11:56:08.775754   15900 cni.go:84] Creating CNI manager for ""
	I1020 11:56:08.775840   15900 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 11:56:08.775855   15900 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 11:56:08.775917   15900 start.go:349] cluster config:
	{Name:addons-053741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 11:56:08.777626   15900 out.go:179] * Starting "addons-053741" primary control-plane node in "addons-053741" cluster
	I1020 11:56:08.779161   15900 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 11:56:08.780552   15900 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 11:56:08.782012   15900 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 11:56:08.782057   15900 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 11:56:08.782068   15900 cache.go:58] Caching tarball of preloaded images
	I1020 11:56:08.782089   15900 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 11:56:08.782172   15900 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 11:56:08.782187   15900 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 11:56:08.782544   15900 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/config.json ...
	I1020 11:56:08.782573   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/config.json: {Name:mka0af212ef52bccd2f81f1166643cbe60e0e889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:08.799003   15900 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1020 11:56:08.799153   15900 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1020 11:56:08.799172   15900 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1020 11:56:08.799176   15900 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1020 11:56:08.799183   15900 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1020 11:56:08.799191   15900 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1020 11:56:21.400635   15900 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1020 11:56:21.400671   15900 cache.go:232] Successfully downloaded all kic artifacts
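
The cache.go lines above follow a download-once pattern: the kic base image is looked up in the local Docker daemon and in the local cache directory, saved as a tarball on first use, and loaded from that tarball on later runs (here the load takes ~12.6s instead of a full registry pull). A minimal sketch of the same pattern, shelling out to the docker CLI; the ensureBaseImage helper and its cache layout are illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // ensureBaseImage keeps a local tarball of image under cacheDir and only
    // shells out to `docker pull` + `docker save` on a cache miss.
    func ensureBaseImage(image, cacheDir string) (string, error) {
        name := strings.NewReplacer("/", "_", ":", "_", "@", "_").Replace(image)
        tarball := filepath.Join(cacheDir, name+".tar")
        if _, err := os.Stat(tarball); err == nil {
            return tarball, nil // cache hit: skip the pull, as in the log above
        }
        if out, err := exec.Command("docker", "pull", image).CombinedOutput(); err != nil {
            return "", fmt.Errorf("pull: %v: %s", err, out)
        }
        if out, err := exec.Command("docker", "save", "-o", tarball, image).CombinedOutput(); err != nil {
            return "", fmt.Errorf("save: %v: %s", err, out)
        }
        return tarball, nil
    }

    func main() {
        tar, err := ensureBaseImage("gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773", os.TempDir())
        fmt.Println(tar, err)
    }
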
	I1020 11:56:21.400711   15900 start.go:360] acquireMachinesLock for addons-053741: {Name:mkcdccf6181f0e4e87f181300157c2558692b419 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 11:56:21.400829   15900 start.go:364] duration metric: took 99.997µs to acquireMachinesLock for "addons-053741"
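
The spec printed with acquireMachinesLock ({... Delay:500ms Timeout:10m0s ...}) describes a cross-process lock: a concurrent minikube invocation retries every Delay until Timeout before giving up, so parallel test runs cannot provision the same machine twice. A rough file-based equivalent (minikube's real implementation uses a proper mutex library; this sketch only mirrors the Delay/Timeout semantics):

    package main

    import (
        "errors"
        "os"
        "time"
    )

    // acquire polls for an exclusive lock file every delay until timeout,
    // mirroring the Delay/Timeout fields in the spec logged above.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for machines lock")
            }
            time.Sleep(delay)
        }
    }
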
	I1020 11:56:21.400855   15900 start.go:93] Provisioning new machine with config: &{Name:addons-053741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 11:56:21.400920   15900 start.go:125] createHost starting for "" (driver="docker")
	I1020 11:56:21.403625   15900 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1020 11:56:21.403881   15900 start.go:159] libmachine.API.Create for "addons-053741" (driver="docker")
	I1020 11:56:21.403914   15900 client.go:168] LocalClient.Create starting
	I1020 11:56:21.404040   15900 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 11:56:21.569633   15900 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 11:56:21.629718   15900 cli_runner.go:164] Run: docker network inspect addons-053741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 11:56:21.647044   15900 cli_runner.go:211] docker network inspect addons-053741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 11:56:21.647138   15900 network_create.go:284] running [docker network inspect addons-053741] to gather additional debugging logs...
	I1020 11:56:21.647163   15900 cli_runner.go:164] Run: docker network inspect addons-053741
	W1020 11:56:21.663226   15900 cli_runner.go:211] docker network inspect addons-053741 returned with exit code 1
	I1020 11:56:21.663255   15900 network_create.go:287] error running [docker network inspect addons-053741]: docker network inspect addons-053741: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-053741 not found
	I1020 11:56:21.663287   15900 network_create.go:289] output of [docker network inspect addons-053741]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-053741 not found
	
	** /stderr **
	I1020 11:56:21.663432   15900 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 11:56:21.680812   15900 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc2990}
	I1020 11:56:21.680846   15900 network_create.go:124] attempt to create docker network addons-053741 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1020 11:56:21.680893   15900 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-053741 addons-053741
	I1020 11:56:21.738321   15900 network_create.go:108] docker network addons-053741 192.168.49.0/24 created
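
Network creation above is two steps: network.go probes for a free private /24 (192.168.49.0/24 on this host) and cli_runner.go issues a plain docker network create with a fixed subnet, gateway, and MTU. The same invocation from Go, with the flags copied from the Run: line above:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Flags copied verbatim from the cli_runner invocation in the log.
        cmd := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.49.0/24",
            "--gateway=192.168.49.1",
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=addons-053741",
            "addons-053741")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("network create failed: %v: %s", err, out)
        }
    }
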
	I1020 11:56:21.738351   15900 kic.go:121] calculated static IP "192.168.49.2" for the "addons-053741" container
	I1020 11:56:21.738416   15900 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 11:56:21.755352   15900 cli_runner.go:164] Run: docker volume create addons-053741 --label name.minikube.sigs.k8s.io=addons-053741 --label created_by.minikube.sigs.k8s.io=true
	I1020 11:56:21.773712   15900 oci.go:103] Successfully created a docker volume addons-053741
	I1020 11:56:21.773815   15900 cli_runner.go:164] Run: docker run --rm --name addons-053741-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-053741 --entrypoint /usr/bin/test -v addons-053741:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 11:56:28.454707   15900 cli_runner.go:217] Completed: docker run --rm --name addons-053741-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-053741 --entrypoint /usr/bin/test -v addons-053741:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.68084714s)
	I1020 11:56:28.454737   15900 oci.go:107] Successfully prepared a docker volume addons-053741
	I1020 11:56:28.454758   15900 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 11:56:28.454793   15900 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 11:56:28.454852   15900 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-053741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 11:56:32.822532   15900 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-053741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.367634417s)
	I1020 11:56:32.822565   15900 kic.go:203] duration metric: took 4.367769587s to extract preloaded images to volume ...
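
The extraction trick above is worth noting: the preloaded-images tarball is unpacked into the addons-053741 volume before the node container even exists, by running a throwaway container whose entrypoint is tar, with the tarball bind-mounted read-only and the volume mounted at /extractDir. Reconstructed from the Run: line (the image digest and the long host path are shortened into constants for readability):

    package main

    import (
        "log"
        "os/exec"
    )

    const (
        kicImage = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773" // digest omitted here
        tarball  = "preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4" // full cache path in the log
    )

    func main() {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro", // tarball visible read-only inside the helper
            "-v", "addons-053741:/extractDir",  // the named volume that becomes /var of the node
            kicImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v: %s", err, out)
        }
    }
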
	W1020 11:56:32.822646   15900 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 11:56:32.822674   15900 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 11:56:32.822704   15900 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 11:56:32.880379   15900 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-053741 --name addons-053741 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-053741 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-053741 --network addons-053741 --ip 192.168.49.2 --volume addons-053741:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 11:56:33.177594   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Running}}
	I1020 11:56:33.198330   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:33.216361   15900 cli_runner.go:164] Run: docker exec addons-053741 stat /var/lib/dpkg/alternatives/iptables
	I1020 11:56:33.269151   15900 oci.go:144] the created container "addons-053741" has a running status.
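
The long docker run at 11:56:32.88 is where the "machine" actually appears. A grouped view of its key flags (the comments are annotations added for reading, not minikube output; see the Run: line above for the complete set):

    package main

    import "fmt"

    func main() {
        args := []string{
            "run", "-d", "-t",
            "--privileged", "--security-opt", "seccomp=unconfined", // systemd plus a nested runtime need broad privileges
            "--security-opt", "apparmor=unconfined",
            "--tmpfs", "/tmp", "--tmpfs", "/run", // fresh tmpfs mounts, as systemd expects
            "-v", "/lib/modules:/lib/modules:ro", // host kernel modules, read-only
            "--network", "addons-053741", "--ip", "192.168.49.2", // the bridge created earlier, static IP
            "--volume", "addons-053741:/var", // the volume holding the extracted preload
            "--memory=4096mb", "-e", "container=docker",
            "--publish=127.0.0.1::8443", "--publish=127.0.0.1::22", // loopback-only random host ports for apiserver and SSH
        }
        fmt.Println(args)
    }
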
	I1020 11:56:33.269191   15900 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa...
	I1020 11:56:33.364299   15900 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 11:56:33.390827   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:33.410237   15900 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 11:56:33.410262   15900 kic_runner.go:114] Args: [docker exec --privileged addons-053741 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 11:56:33.467110   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:33.485438   15900 machine.go:93] provisionDockerMachine start ...
	I1020 11:56:33.485546   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:33.510914   15900 main.go:141] libmachine: Using SSH client type: native
	I1020 11:56:33.511236   15900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1020 11:56:33.511261   15900 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 11:56:33.511996   15900 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60374->127.0.0.1:32768: read: connection reset by peer
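
This reset is expected rather than fatal: the container was started moments earlier and sshd is not yet accepting connections, so the provisioner retries until the dial succeeds (three seconds later, below). A minimal dial-until-ready loop in the same spirit (illustrative only; minikube's retry and backoff details differ):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH retries a TCP dial until the port accepts connections or the
    // deadline passes; early resets like the one logged above are expected.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
    }
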
	I1020 11:56:36.653764   15900 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-053741
	
	I1020 11:56:36.653805   15900 ubuntu.go:182] provisioning hostname "addons-053741"
	I1020 11:56:36.653877   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:36.672258   15900 main.go:141] libmachine: Using SSH client type: native
	I1020 11:56:36.672467   15900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1020 11:56:36.672479   15900 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-053741 && echo "addons-053741" | sudo tee /etc/hostname
	I1020 11:56:36.820309   15900 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-053741
	
	I1020 11:56:36.820382   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:36.838814   15900 main.go:141] libmachine: Using SSH client type: native
	I1020 11:56:36.839024   15900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1020 11:56:36.839063   15900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-053741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-053741/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-053741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 11:56:36.979642   15900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 11:56:36.979674   15900 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 11:56:36.979741   15900 ubuntu.go:190] setting up certificates
	I1020 11:56:36.979757   15900 provision.go:84] configureAuth start
	I1020 11:56:36.979858   15900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-053741
	I1020 11:56:36.997801   15900 provision.go:143] copyHostCerts
	I1020 11:56:36.997873   15900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 11:56:36.998026   15900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 11:56:36.998295   15900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 11:56:36.998453   15900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.addons-053741 san=[127.0.0.1 192.168.49.2 addons-053741 localhost minikube]
	I1020 11:56:37.209917   15900 provision.go:177] copyRemoteCerts
	I1020 11:56:37.209974   15900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 11:56:37.210008   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:37.227650   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:37.327458   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 11:56:37.347387   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 11:56:37.365572   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1020 11:56:37.382496   15900 provision.go:87] duration metric: took 402.72045ms to configureAuth
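
configureAuth above generates a server certificate signed by the minikube CA whose SANs cover every name the machine answers to (san=[127.0.0.1 192.168.49.2 addons-053741 localhost minikube]). A compressed crypto/x509 sketch of that step; the in-memory CA here stands in for the ca.pem/ca-key.pem pair loaded from .minikube/certs:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-053741"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs exactly as logged: both DNS names and raw IPs.
            DNSNames:    []string{"addons-053741", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
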
	I1020 11:56:37.382522   15900 ubuntu.go:206] setting minikube options for container-runtime
	I1020 11:56:37.382711   15900 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:56:37.382946   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:37.400314   15900 main.go:141] libmachine: Using SSH client type: native
	I1020 11:56:37.400533   15900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1020 11:56:37.400550   15900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 11:56:37.644520   15900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 11:56:37.644543   15900 machine.go:96] duration metric: took 4.159082513s to provisionDockerMachine
	I1020 11:56:37.644552   15900 client.go:171] duration metric: took 16.240629628s to LocalClient.Create
	I1020 11:56:37.644569   15900 start.go:167] duration metric: took 16.240689069s to libmachine.API.Create "addons-053741"
	I1020 11:56:37.644576   15900 start.go:293] postStartSetup for "addons-053741" (driver="docker")
	I1020 11:56:37.644588   15900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 11:56:37.644659   15900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 11:56:37.644711   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:37.662145   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:37.762963   15900 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 11:56:37.766567   15900 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 11:56:37.766600   15900 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 11:56:37.766611   15900 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 11:56:37.766666   15900 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 11:56:37.766692   15900 start.go:296] duration metric: took 122.111181ms for postStartSetup
	I1020 11:56:37.766987   15900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-053741
	I1020 11:56:37.784302   15900 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/config.json ...
	I1020 11:56:37.784566   15900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 11:56:37.784604   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:37.802863   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:37.899938   15900 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 11:56:37.904427   15900 start.go:128] duration metric: took 16.50349315s to createHost
	I1020 11:56:37.904454   15900 start.go:83] releasing machines lock for "addons-053741", held for 16.503610565s
	I1020 11:56:37.904522   15900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-053741
	I1020 11:56:37.923025   15900 ssh_runner.go:195] Run: cat /version.json
	I1020 11:56:37.923086   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:37.923097   15900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 11:56:37.923153   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:37.940921   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:37.941851   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:38.035941   15900 ssh_runner.go:195] Run: systemctl --version
	I1020 11:56:38.092493   15900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 11:56:38.126876   15900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 11:56:38.131693   15900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 11:56:38.131763   15900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 11:56:38.157060   15900 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
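
Conflicting bridge/podman CNI configs are disabled by renaming rather than deleting, so a later start can restore them; the .mk_disabled suffix simply makes the runtime's config loader skip the file. Equivalent logic to the find/mv pipeline above:

    package main

    import (
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, _ := filepath.Glob("/etc/cni/net.d/*")
        for _, p := range matches {
            base := filepath.Base(p)
            // Same predicate as the find invocation above: bridge/podman
            // configs that are not already disabled.
            if (strings.Contains(base, "bridge") || strings.Contains(base, "podman")) &&
                !strings.HasSuffix(base, ".mk_disabled") {
                if err := os.Rename(p, p+".mk_disabled"); err != nil {
                    log.Fatal(err)
                }
            }
        }
    }
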
	I1020 11:56:38.157080   15900 start.go:495] detecting cgroup driver to use...
	I1020 11:56:38.157126   15900 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 11:56:38.157169   15900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 11:56:38.173293   15900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 11:56:38.185953   15900 docker.go:218] disabling cri-docker service (if available) ...
	I1020 11:56:38.186005   15900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 11:56:38.201875   15900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 11:56:38.220964   15900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 11:56:38.302230   15900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 11:56:38.387724   15900 docker.go:234] disabling docker service ...
	I1020 11:56:38.387805   15900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 11:56:38.405793   15900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 11:56:38.418484   15900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 11:56:38.499921   15900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 11:56:38.579090   15900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 11:56:38.591470   15900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 11:56:38.605434   15900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 11:56:38.605499   15900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.616167   15900 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 11:56:38.616237   15900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.625346   15900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.634049   15900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.642541   15900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 11:56:38.650537   15900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.659161   15900 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.672395   15900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:56:38.681063   15900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 11:56:38.688335   15900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1020 11:56:38.688395   15900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1020 11:56:38.700582   15900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 11:56:38.708605   15900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 11:56:38.787302   15900 ssh_runner.go:195] Run: sudo systemctl restart crio
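
After the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain roughly the following (reconstructed from the commands, not captured from the node; the section headers follow CRI-O's config schema): the pause image and cgroup manager are pinned, conmon is moved into the pod's cgroup, and unprivileged processes may bind low ports.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
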
	I1020 11:56:38.890184   15900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 11:56:38.890254   15900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 11:56:38.894134   15900 start.go:563] Will wait 60s for crictl version
	I1020 11:56:38.894182   15900 ssh_runner.go:195] Run: which crictl
	I1020 11:56:38.897679   15900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 11:56:38.920641   15900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 11:56:38.920744   15900 ssh_runner.go:195] Run: crio --version
	I1020 11:56:38.946597   15900 ssh_runner.go:195] Run: crio --version
	I1020 11:56:38.974916   15900 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 11:56:38.976447   15900 cli_runner.go:164] Run: docker network inspect addons-053741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 11:56:38.993556   15900 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1020 11:56:38.997500   15900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 11:56:39.007633   15900 kubeadm.go:883] updating cluster {Name:addons-053741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 11:56:39.007742   15900 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 11:56:39.007803   15900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 11:56:39.038259   15900 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 11:56:39.038278   15900 crio.go:433] Images already preloaded, skipping extraction
	I1020 11:56:39.038326   15900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 11:56:39.063265   15900 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 11:56:39.063292   15900 cache_images.go:85] Images are preloaded, skipping loading
	I1020 11:56:39.063299   15900 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1020 11:56:39.063389   15900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-053741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-053741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 11:56:39.063470   15900 ssh_runner.go:195] Run: crio config
	I1020 11:56:39.108082   15900 cni.go:84] Creating CNI manager for ""
	I1020 11:56:39.108106   15900 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 11:56:39.108131   15900 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 11:56:39.108153   15900 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-053741 NodeName:addons-053741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 11:56:39.108271   15900 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-053741"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 11:56:39.108329   15900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 11:56:39.116479   15900 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 11:56:39.116540   15900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 11:56:39.123925   15900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1020 11:56:39.135888   15900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 11:56:39.151125   15900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
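
The "scp memory -->" lines mean the kubelet drop-in, the unit file, and kubeadm.yaml.new are rendered in memory and streamed over the SSH session; no temporary file is written on the build host. A sketch of that transfer with golang.org/x/crypto/ssh, using sudo tee as the remote sink (minikube's actual transfer helper differs):

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // pushBytes writes data to remotePath over an established SSH client by
    // piping it into `sudo tee`, so no local temporary file is needed.
    func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    }
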
	I1020 11:56:39.163397   15900 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1020 11:56:39.167046   15900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 11:56:39.177028   15900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 11:56:39.252344   15900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 11:56:39.278283   15900 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741 for IP: 192.168.49.2
	I1020 11:56:39.278305   15900 certs.go:195] generating shared ca certs ...
	I1020 11:56:39.278328   15900 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:39.278440   15900 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 11:56:39.633836   15900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt ...
	I1020 11:56:39.633867   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt: {Name:mkd4283c49b35ab0b046ccb70ad96bfdc7ba8c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:39.634042   15900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key ...
	I1020 11:56:39.634058   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key: {Name:mk854c3edcef668e8b0061c2f1cf9591ba30304d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:39.634132   15900 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 11:56:39.896741   15900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt ...
	I1020 11:56:39.896780   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt: {Name:mkb7f4b59907f6c15f36fa85b6156fd4fe57bd77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:39.896944   15900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key ...
	I1020 11:56:39.896955   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key: {Name:mk76fe7029c9c20baac31bcfd9c786c4cca764ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:39.897031   15900 certs.go:257] generating profile certs ...
	I1020 11:56:39.897093   15900 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.key
	I1020 11:56:39.897107   15900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt with IP's: []
	I1020 11:56:40.165271   15900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt ...
	I1020 11:56:40.165301   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: {Name:mka976861148c42dfdc0036143c0f4cd4cb6de63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:40.165467   15900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.key ...
	I1020 11:56:40.165478   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.key: {Name:mk900e14c5d3af0870911210416b9178e8d9a8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:40.165551   15900 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.key.c0aa3d13
	I1020 11:56:40.165570   15900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.crt.c0aa3d13 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1020 11:56:40.409912   15900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.crt.c0aa3d13 ...
	I1020 11:56:40.409942   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.crt.c0aa3d13: {Name:mk3449df9f6180b42abef687c645d7f336841e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:40.410106   15900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.key.c0aa3d13 ...
	I1020 11:56:40.410119   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.key.c0aa3d13: {Name:mk39b06ded4a76287ad1d835919fb11a8bb60bc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:40.411166   15900 certs.go:382] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.crt.c0aa3d13 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.crt
	I1020 11:56:40.411276   15900 certs.go:386] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.key.c0aa3d13 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.key
	I1020 11:56:40.411331   15900 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.key
	I1020 11:56:40.411350   15900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.crt with IP's: []
	I1020 11:56:41.106717   15900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.crt ...
	I1020 11:56:41.106745   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.crt: {Name:mkd51af5fe344c3ecc6fa772d38f7b9edd844154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:41.106915   15900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.key ...
	I1020 11:56:41.106927   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.key: {Name:mkf4e7d0b0d92d3a97ffca1208025fcf09fe71cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:41.107108   15900 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 11:56:41.107148   15900 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 11:56:41.107171   15900 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 11:56:41.107200   15900 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 11:56:41.107762   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 11:56:41.125227   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 11:56:41.142141   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 11:56:41.159089   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 11:56:41.175606   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1020 11:56:41.192389   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 11:56:41.209029   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 11:56:41.225730   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 11:56:41.242056   15900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 11:56:41.260749   15900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 11:56:41.272580   15900 ssh_runner.go:195] Run: openssl version
	I1020 11:56:41.278356   15900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 11:56:41.288871   15900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 11:56:41.292455   15900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 11:56:41.292504   15900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 11:56:41.326132   15900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
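
The b5213941.0 link name is OpenSSL's subject-hash convention: TLS libraries locate a CA in /etc/ssl/certs by <subject-hash>.0, which is exactly what the two Run: lines above compute and link. The same steps in Go:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            log.Fatal(err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        // ln -fs equivalent: replace any stale link first.
        os.Remove(link)
        if err := os.Symlink(pemPath, link); err != nil {
            log.Fatal(err)
        }
    }
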
	I1020 11:56:41.334786   15900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 11:56:41.338479   15900 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 11:56:41.338535   15900 kubeadm.go:400] StartCluster: {Name:addons-053741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 11:56:41.338628   15900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:56:41.338699   15900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:56:41.363960   15900 cri.go:89] found id: ""
	I1020 11:56:41.364025   15900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 11:56:41.372275   15900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 11:56:41.380307   15900 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 11:56:41.380363   15900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 11:56:41.388387   15900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 11:56:41.388402   15900 kubeadm.go:157] found existing configuration files:
	
	I1020 11:56:41.388441   15900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 11:56:41.396109   15900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 11:56:41.396167   15900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 11:56:41.403569   15900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 11:56:41.411757   15900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 11:56:41.411847   15900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 11:56:41.419571   15900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 11:56:41.427339   15900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 11:56:41.427399   15900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 11:56:41.434628   15900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 11:56:41.442111   15900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 11:56:41.442158   15900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
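	The block above is minikube's stale-config cleanup: each expected kubeconfig (admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf) is grepped for the control-plane endpoint and removed when the check fails. A minimal sketch of that grep-then-remove pattern, using the endpoint and paths from the log (this is an illustration, not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // cleanStaleConfigs mirrors the grep-then-rm sequence in the log:
    // grep exits non-zero when the pattern (or the file itself) is missing,
    // and either way the file is treated as stale and removed.
    func cleanStaleConfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		if err := exec.Command("grep", "-q", endpoint, p).Run(); err != nil {
    			fmt.Printf("%s may be stale - removing\n", p)
    			os.Remove(p) // ignore "no such file" errors, as the log does
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }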
	I1020 11:56:41.449535   15900 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 11:56:41.485364   15900 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 11:56:41.485431   15900 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 11:56:41.518569   15900 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 11:56:41.518664   15900 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1020 11:56:41.518741   15900 kubeadm.go:318] OS: Linux
	I1020 11:56:41.518847   15900 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 11:56:41.518945   15900 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 11:56:41.519023   15900 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 11:56:41.519116   15900 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 11:56:41.519186   15900 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 11:56:41.519275   15900 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 11:56:41.519366   15900 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 11:56:41.519449   15900 kubeadm.go:318] CGROUPS_IO: enabled
	I1020 11:56:41.576131   15900 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 11:56:41.576300   15900 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 11:56:41.576455   15900 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 11:56:41.582894   15900 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 11:56:41.585213   15900 out.go:252]   - Generating certificates and keys ...
	I1020 11:56:41.585329   15900 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 11:56:41.585445   15900 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 11:56:41.755942   15900 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 11:56:41.890346   15900 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 11:56:42.138384   15900 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 11:56:42.584259   15900 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 11:56:42.787960   15900 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 11:56:42.788074   15900 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-053741 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1020 11:56:42.854691   15900 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 11:56:42.854834   15900 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-053741 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1020 11:56:43.062204   15900 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 11:56:43.203840   15900 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 11:56:43.800169   15900 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 11:56:43.800261   15900 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 11:56:43.886198   15900 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 11:56:44.496897   15900 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 11:56:44.744014   15900 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 11:56:45.092185   15900 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 11:56:45.287755   15900 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 11:56:45.288257   15900 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 11:56:45.291983   15900 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1020 11:56:45.293807   15900 out.go:252]   - Booting up control plane ...
	I1020 11:56:45.293913   15900 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 11:56:45.294013   15900 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 11:56:45.294567   15900 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 11:56:45.322076   15900 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 11:56:45.322226   15900 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 11:56:45.328966   15900 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 11:56:45.329127   15900 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 11:56:45.329217   15900 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 11:56:45.426306   15900 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 11:56:45.426451   15900 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 11:56:45.928113   15900 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.845905ms
	I1020 11:56:45.931944   15900 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 11:56:45.932080   15900 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1020 11:56:45.932199   15900 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 11:56:45.932328   15900 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 11:56:47.702110   15900 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.769332644s
	I1020 11:56:47.972060   15900 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.040133633s
	I1020 11:56:49.433965   15900 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501999515s
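	The [control-plane-check] lines poll fixed health endpoints (kubelet :10248/healthz, kube-apiserver :8443/livez, kube-controller-manager :10257/healthz, kube-scheduler :10259/livez) until each answers 200. A minimal sketch of that polling loop, with the endpoint and the 4m0s budget taken from the log (TLS verification is skipped here purely for illustration; kubeadm's own checks handle certificates differently):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it answers 200 OK or the deadline passes,
    // the same shape as kubeadm's control-plane-check loop above.
    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	// "This can take up to 4m0s" in the log is the same budget used here.
    	if err := waitHealthy("https://127.0.0.1:10257/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }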
	I1020 11:56:49.444945   15900 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 11:56:49.455143   15900 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 11:56:49.463026   15900 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 11:56:49.463303   15900 kubeadm.go:318] [mark-control-plane] Marking the node addons-053741 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 11:56:49.471015   15900 kubeadm.go:318] [bootstrap-token] Using token: z27odz.nb33zoome7hq0gb4
	I1020 11:56:49.472384   15900 out.go:252]   - Configuring RBAC rules ...
	I1020 11:56:49.472533   15900 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 11:56:49.474999   15900 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 11:56:49.479729   15900 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 11:56:49.482295   15900 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 11:56:49.484625   15900 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 11:56:49.488132   15900 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 11:56:49.840233   15900 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 11:56:50.268040   15900 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 11:56:50.840597   15900 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 11:56:50.841641   15900 kubeadm.go:318] 
	I1020 11:56:50.841719   15900 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 11:56:50.841727   15900 kubeadm.go:318] 
	I1020 11:56:50.841826   15900 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 11:56:50.841836   15900 kubeadm.go:318] 
	I1020 11:56:50.841857   15900 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 11:56:50.841910   15900 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 11:56:50.841952   15900 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 11:56:50.841958   15900 kubeadm.go:318] 
	I1020 11:56:50.842000   15900 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 11:56:50.842006   15900 kubeadm.go:318] 
	I1020 11:56:50.842043   15900 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 11:56:50.842049   15900 kubeadm.go:318] 
	I1020 11:56:50.842095   15900 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 11:56:50.842162   15900 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 11:56:50.842218   15900 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 11:56:50.842224   15900 kubeadm.go:318] 
	I1020 11:56:50.842294   15900 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 11:56:50.842388   15900 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 11:56:50.842408   15900 kubeadm.go:318] 
	I1020 11:56:50.842534   15900 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token z27odz.nb33zoome7hq0gb4 \
	I1020 11:56:50.842639   15900 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e \
	I1020 11:56:50.842659   15900 kubeadm.go:318] 	--control-plane 
	I1020 11:56:50.842663   15900 kubeadm.go:318] 
	I1020 11:56:50.842791   15900 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 11:56:50.842800   15900 kubeadm.go:318] 
	I1020 11:56:50.842867   15900 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token z27odz.nb33zoome7hq0gb4 \
	I1020 11:56:50.842959   15900 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e 
	I1020 11:56:50.845031   15900 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1020 11:56:50.845135   15900 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1020 11:56:50.845159   15900 cni.go:84] Creating CNI manager for ""
	I1020 11:56:50.845180   15900 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 11:56:50.847478   15900 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 11:56:50.848702   15900 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 11:56:50.852848   15900 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 11:56:50.852865   15900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 11:56:50.865320   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 11:56:51.068632   15900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 11:56:51.068736   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:51.068816   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-053741 minikube.k8s.io/updated_at=2025_10_20T11_56_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=addons-053741 minikube.k8s.io/primary=true
	I1020 11:56:51.144747   15900 ops.go:34] apiserver oom_adj: -16
	I1020 11:56:51.144783   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:51.645544   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:52.145575   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:52.645093   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:53.145034   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:53.645438   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:54.145021   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:54.645657   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:55.145874   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:55.645922   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:56.144876   15900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:56:56.206916   15900 kubeadm.go:1113] duration metric: took 5.138235066s to wait for elevateKubeSystemPrivileges
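	The burst of identical `kubectl get sa default` runs above is minikube waiting, at roughly 500ms intervals, for the default service account to exist before granting kube-system elevated RBAC (the elevateKubeSystemPrivileges step that the duration metric closes out). A sketch of the same poll-until-success loop, assuming kubectl on PATH (the full binary path from the log is shortened):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA re-runs `kubectl get sa default` until it succeeds,
    // i.e. until the token controller has created the service account.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
    		if cmd.Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }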
	I1020 11:56:56.206956   15900 kubeadm.go:402] duration metric: took 14.86842521s to StartCluster
	I1020 11:56:56.206977   15900 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:56.207133   15900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 11:56:56.207624   15900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:56.207816   15900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 11:56:56.207871   15900 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 11:56:56.207915   15900 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1020 11:56:56.208033   15900 addons.go:69] Setting yakd=true in profile "addons-053741"
	I1020 11:56:56.208040   15900 addons.go:69] Setting gcp-auth=true in profile "addons-053741"
	I1020 11:56:56.208076   15900 mustload.go:65] Loading cluster: addons-053741
	I1020 11:56:56.208078   15900 addons.go:238] Setting addon yakd=true in "addons-053741"
	I1020 11:56:56.208112   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208104   15900 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-053741"
	I1020 11:56:56.208121   15900 addons.go:69] Setting registry=true in profile "addons-053741"
	I1020 11:56:56.208156   15900 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-053741"
	I1020 11:56:56.208162   15900 addons.go:238] Setting addon registry=true in "addons-053741"
	I1020 11:56:56.208170   15900 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:56:56.208222   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208227   15900 addons.go:69] Setting volcano=true in profile "addons-053741"
	I1020 11:56:56.208239   15900 addons.go:69] Setting cloud-spanner=true in profile "addons-053741"
	I1020 11:56:56.208250   15900 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-053741"
	I1020 11:56:56.208255   15900 addons.go:238] Setting addon cloud-spanner=true in "addons-053741"
	I1020 11:56:56.208270   15900 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:56:56.208290   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208308   15900 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-053741"
	I1020 11:56:56.208330   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208365   15900 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-053741"
	I1020 11:56:56.208411   15900 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-053741"
	I1020 11:56:56.208444   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208473   15900 addons.go:69] Setting volumesnapshots=true in profile "addons-053741"
	I1020 11:56:56.208503   15900 addons.go:238] Setting addon volumesnapshots=true in "addons-053741"
	I1020 11:56:56.208528   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208559   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208567   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208642   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208712   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208729   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208929   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208963   15900 addons.go:69] Setting inspektor-gadget=true in profile "addons-053741"
	I1020 11:56:56.208985   15900 addons.go:238] Setting addon inspektor-gadget=true in "addons-053741"
	I1020 11:56:56.209006   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.209019   15900 addons.go:69] Setting ingress=true in profile "addons-053741"
	I1020 11:56:56.209036   15900 addons.go:238] Setting addon ingress=true in "addons-053741"
	I1020 11:56:56.209057   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.209375   15900 addons.go:69] Setting ingress-dns=true in profile "addons-053741"
	I1020 11:56:56.209420   15900 addons.go:238] Setting addon ingress-dns=true in "addons-053741"
	I1020 11:56:56.209454   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.209950   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.210140   15900 out.go:179] * Verifying Kubernetes components...
	I1020 11:56:56.208241   15900 addons.go:238] Setting addon volcano=true in "addons-053741"
	I1020 11:56:56.210482   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.210609   15900 addons.go:69] Setting storage-provisioner=true in profile "addons-053741"
	I1020 11:56:56.211350   15900 addons.go:238] Setting addon storage-provisioner=true in "addons-053741"
	I1020 11:56:56.211386   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.208929   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.209011   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.211875   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.210966   15900 addons.go:69] Setting default-storageclass=true in profile "addons-053741"
	I1020 11:56:56.212056   15900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-053741"
	I1020 11:56:56.210997   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.208951   15900 addons.go:69] Setting registry-creds=true in profile "addons-053741"
	I1020 11:56:56.212368   15900 addons.go:238] Setting addon registry-creds=true in "addons-053741"
	I1020 11:56:56.212402   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.212865   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.212989   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.213146   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.214797   15900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 11:56:56.211104   15900 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-053741"
	I1020 11:56:56.215185   15900 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-053741"
	I1020 11:56:56.215227   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.211113   15900 addons.go:69] Setting metrics-server=true in profile "addons-053741"
	I1020 11:56:56.215441   15900 addons.go:238] Setting addon metrics-server=true in "addons-053741"
	I1020 11:56:56.215475   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.215767   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.216284   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.221416   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.264033   15900 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1020 11:56:56.265376   15900 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1020 11:56:56.265396   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1020 11:56:56.265459   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.265644   15900 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1020 11:56:56.266857   15900 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1020 11:56:56.268164   15900 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1020 11:56:56.268214   15900 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1020 11:56:56.268270   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.268304   15900 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1020 11:56:56.268313   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1020 11:56:56.268361   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.281524   15900 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-053741"
	I1020 11:56:56.281610   15900 host.go:66] Checking if "addons-053741" exists ...
	W1020 11:56:56.292947   15900 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1020 11:56:56.296294   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.298994   15900 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1020 11:56:56.300289   15900 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 11:56:56.300351   15900 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 11:56:56.300592   15900 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1020 11:56:56.300689   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.301937   15900 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 11:56:56.301956   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 11:56:56.302022   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.302304   15900 addons.go:238] Setting addon default-storageclass=true in "addons-053741"
	I1020 11:56:56.302344   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.302955   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:56:56.307626   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1020 11:56:56.308939   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:56:56.308957   15900 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1020 11:56:56.308975   15900 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1020 11:56:56.309062   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.311968   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1020 11:56:56.312143   15900 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1020 11:56:56.312172   15900 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1020 11:56:56.317195   15900 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1020 11:56:56.317450   15900 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1020 11:56:56.317468   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1020 11:56:56.317532   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.317724   15900 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1020 11:56:56.318202   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1020 11:56:56.318266   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.321932   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1020 11:56:56.322465   15900 out.go:179]   - Using image docker.io/registry:3.0.0
	I1020 11:56:56.324237   15900 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1020 11:56:56.324261   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1020 11:56:56.324329   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.324498   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1020 11:56:56.326412   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1020 11:56:56.327808   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1020 11:56:56.329224   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1020 11:56:56.330426   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1020 11:56:56.331853   15900 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1020 11:56:56.335076   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1020 11:56:56.335095   15900 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1020 11:56:56.335161   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.352956   15900 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1020 11:56:56.354155   15900 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1020 11:56:56.354177   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1020 11:56:56.354249   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.354448   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.360832   15900 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1020 11:56:56.360909   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.366059   15900 out.go:179]   - Using image docker.io/busybox:stable
	I1020 11:56:56.367924   15900 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1020 11:56:56.368051   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1020 11:56:56.368115   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.370948   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.373639   15900 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 11:56:56.375513   15900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 11:56:56.377151   15900 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1020 11:56:56.378490   15900 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 11:56:56.379229   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.380117   15900 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1020 11:56:56.381836   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1020 11:56:56.381974   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.383336   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.385032   15900 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1020 11:56:56.386593   15900 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1020 11:56:56.386612   15900 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1020 11:56:56.386672   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.388876   15900 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 11:56:56.388896   15900 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 11:56:56.388944   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:56:56.389177   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.397648   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.399763   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.420010   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.420562   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.425997   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.428788   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.438384   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.447953   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	W1020 11:56:56.451813   15900 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1020 11:56:56.451851   15900 retry.go:31] will retry after 165.090909ms: ssh: handshake failed: EOF
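	The sshutil/retry pair above shows the transient-failure handling used throughout this run: a failed SSH handshake is retried after a randomized delay (165.090909ms here) instead of failing the whole bring-up. A generic sketch of that retry-with-jitter shape (the helper name and bounds are illustrative, not minikube's retry package):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithJitter re-invokes fn up to attempts times, sleeping a random
    // duration up to maxDelay between tries, like the retry.go lines above.
    func retryWithJitter(attempts int, maxDelay time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := time.Duration(rand.Int63n(int64(maxDelay)))
    		fmt.Printf("will retry after %s: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	i := 0
    	_ = retryWithJitter(3, 200*time.Millisecond, func() error {
    		i++
    		if i < 3 {
    			return errors.New("ssh: handshake failed: EOF")
    		}
    		return nil
    	})
    }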
	I1020 11:56:56.459026   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:56:56.459861   15900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 11:56:56.538299   15900 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1020 11:56:56.538327   15900 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1020 11:56:56.539120   15900 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 11:56:56.539140   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1020 11:56:56.556590   15900 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1020 11:56:56.556618   15900 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1020 11:56:56.558157   15900 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 11:56:56.558177   15900 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1020 11:56:56.559648   15900 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1020 11:56:56.559669   15900 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1020 11:56:56.575000   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 11:56:56.575460   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1020 11:56:56.578551   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1020 11:56:56.585381   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1020 11:56:56.588684   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1020 11:56:56.593480   15900 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 11:56:56.593503   15900 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1020 11:56:56.596104   15900 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1020 11:56:56.596128   15900 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1020 11:56:56.598673   15900 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1020 11:56:56.598695   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1020 11:56:56.601674   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1020 11:56:56.601696   15900 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1020 11:56:56.603298   15900 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1020 11:56:56.603315   15900 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1020 11:56:56.610341   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1020 11:56:56.612819   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1020 11:56:56.613942   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 11:56:56.631547   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1020 11:56:56.640752   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 11:56:56.652713   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1020 11:56:56.652744   15900 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1020 11:56:56.654763   15900 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1020 11:56:56.654807   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1020 11:56:56.656734   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1020 11:56:56.664368   15900 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1020 11:56:56.664412   15900 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1020 11:56:56.695428   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1020 11:56:56.718978   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1020 11:56:56.719013   15900 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1020 11:56:56.729581   15900 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1020 11:56:56.729606   15900 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1020 11:56:56.780661   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1020 11:56:56.780687   15900 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1020 11:56:56.785588   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1020 11:56:56.785616   15900 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1020 11:56:56.839235   15900 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1020 11:56:56.839266   15900 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1020 11:56:56.851908   15900 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 11:56:56.851979   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1020 11:56:56.862904   15900 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:56:56.862981   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1020 11:56:56.865097   15900 node_ready.go:35] waiting up to 6m0s for node "addons-053741" to be "Ready" ...
	I1020 11:56:56.865840   15900 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
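	The sed pipeline started at 11:56:56.375513 rewrites the coredns ConfigMap in flight: it splices a hosts block in front of the forward directive and a log directive after errors, which is what the "host record injected" line above confirms. Reconstructed from that sed expression, the relevant part of the Corefile ends up roughly as:

            errors
            log
            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf

	Pods can then resolve host.minikube.internal to the host gateway (192.168.49.1) without the query ever leaving the cluster.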
	I1020 11:56:56.915368   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 11:56:56.916939   15900 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1020 11:56:56.916962   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1020 11:56:56.943591   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:56:56.969926   15900 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1020 11:56:56.969953   15900 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1020 11:56:57.028393   15900 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1020 11:56:57.028418   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1020 11:56:57.078629   15900 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1020 11:56:57.078654   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1020 11:56:57.110148   15900 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1020 11:56:57.110177   15900 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1020 11:56:57.150813   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1020 11:56:57.374763   15900 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-053741" context rescaled to 1 replicas
	W1020 11:56:57.502373   15900 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
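	The "Operation cannot be fulfilled on storageclasses ... the object has been modified" failure above is a plain optimistic-concurrency conflict: two writers raced to update the same StorageClass between read and write. The standard remedy is to re-read and re-apply the mutation on conflict; a sketch using client-go's retry helper, with the kubeconfig path and class name from this run (the pattern is generic, not what minikube does internally):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    // markDefault re-reads the StorageClass on every attempt, so a concurrent
    // update (the conflict seen in the log) just triggers another round trip.
    func markDefault(cs *kubernetes.Clientset, name string) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
    		return err
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(markDefault(cs, "local-path"))
    }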
	I1020 11:56:57.700537   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.068945002s)
	I1020 11:56:57.700585   15900 addons.go:479] Verifying addon ingress=true in "addons-053741"
	I1020 11:56:57.700689   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.043930794s)
	I1020 11:56:57.700717   15900 addons.go:479] Verifying addon registry=true in "addons-053741"
	I1020 11:56:57.700647   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.059830793s)
	I1020 11:56:57.700756   15900 addons.go:479] Verifying addon metrics-server=true in "addons-053741"
	I1020 11:56:57.700757   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.005292254s)
	I1020 11:56:57.703231   15900 out.go:179] * Verifying ingress addon...
	I1020 11:56:57.703236   15900 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-053741 service yakd-dashboard -n yakd-dashboard
	
	I1020 11:56:57.703232   15900 out.go:179] * Verifying registry addon...
	I1020 11:56:57.706216   15900 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1020 11:56:57.706251   15900 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1020 11:56:57.708373   15900 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1020 11:56:57.708441   15900 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1020 11:56:57.708457   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:56:58.181308   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.265883631s)
	W1020 11:56:58.181375   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1020 11:56:58.181401   15900 retry.go:31] will retry after 342.405105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1020 11:56:58.181427   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.237798704s)
	W1020 11:56:58.181461   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:56:58.181478   15900 retry.go:31] will retry after 324.093183ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
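The ig-crd.yaml failure is different in kind: client-side validation rejects the file because one of its YAML documents has neither apiVersion nor kind set, which typically means an empty stanza, for example one left behind by a stray document separator. The validation can be replayed without touching the cluster; a sketch against the same path from the log:

	# replay only the client-side validation; nothing is created or changed
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
	# empty documents between '---' separators are a common cause of "apiVersion not set, kind not set"
	grep -n '^---' /etc/kubernetes/addons/ig-crd.yaml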
	I1020 11:56:58.181646   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.030791795s)
	I1020 11:56:58.181671   15900 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-053741"
	I1020 11:56:58.183958   15900 out.go:179] * Verifying csi-hostpath-driver addon...
	I1020 11:56:58.186618   15900 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1020 11:56:58.189360   15900 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1020 11:56:58.189376   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:56:58.290393   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:56:58.290464   15900 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1020 11:56:58.290486   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
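The kapi.go poll loop that dominates the remainder of this log repeatedly checks pods behind a label selector until they leave Pending. For the csi-hostpath-driver pods it is equivalent to a wait such as the following, with the namespace and label taken from the log and the timeout assumed:

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m   # timeout assumed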
	I1020 11:56:58.506491   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:56:58.524402   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
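Note the escalation from plain kubectl apply to apply --force for both failing manifests. The --force flag deletes and re-creates objects that cannot be patched in place; it does not bypass validation, which is why the ig-crd.yaml error keeps recurring below. The error text itself names the only switch that would skip it:

	kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml           # re-creates on conflict, still validates
	kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml  # skips the failing client-side validation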
	I1020 11:56:58.689518   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:56:58.709702   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:56:58.709759   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:56:58.867858   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
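The node_ready.go warnings interleaved through the log track a single node condition. The equivalent one-off check, with the node name from the log:

	kubectl get node addons-053741 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False until the kubelet reports Ready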
	W1020 11:56:59.059812   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:56:59.059858   15900 retry.go:31] will retry after 336.62005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 11:56:59.189672   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:56:59.208897   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:56:59.209077   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:56:59.396834   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:56:59.689941   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:56:59.709627   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:56:59.709683   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:00.189643   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:00.209168   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:00.209309   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:00.689570   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:00.708886   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:00.709075   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:00.869727   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:01.012464   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.48801625s)
	I1020 11:57:01.012518   15900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.615648067s)
	W1020 11:57:01.012558   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:01.012587   15900 retry.go:31] will retry after 784.185305ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 11:57:01.190166   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:01.209841   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:01.209927   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:01.689856   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:01.709433   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:01.709492   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:01.797574   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:02.190568   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:02.209002   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:02.209196   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:02.323839   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:02.323872   15900 retry.go:31] will retry after 1.261898765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 11:57:02.690544   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:02.709088   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:02.709158   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:03.189382   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:03.208732   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:03.208863   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:03.368263   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:03.585957   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:03.691285   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:03.709641   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:03.709861   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:03.914793   15900 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1020 11:57:03.914863   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:57:03.935080   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:57:04.047292   15900 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1020 11:57:04.060298   15900 addons.go:238] Setting addon gcp-auth=true in "addons-053741"
	I1020 11:57:04.060357   15900 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:57:04.060910   15900 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:57:04.081596   15900 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1020 11:57:04.081647   15900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:57:04.099795   15900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
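The sshutil lines show how minikube reaches the node under the docker driver: docker container inspect resolves the host port mapped to the container's 22/tcp (32768 here), and the connection is made as the docker user with the profile's private key. The same session can be opened by hand; all values below are taken from the log, and the StrictHostKeyChecking handling is an assumption:

	ssh -o StrictHostKeyChecking=no -p 32768 \
	  -i /home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa \
	  docker@127.0.0.1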
	W1020 11:57:04.122353   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:04.122386   15900 retry.go:31] will retry after 1.737686992s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 11:57:04.190844   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:04.197300   15900 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 11:57:04.198686   15900 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1020 11:57:04.199904   15900 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1020 11:57:04.199916   15900 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1020 11:57:04.209751   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:04.209974   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:04.213147   15900 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1020 11:57:04.213160   15900 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1020 11:57:04.225371   15900 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1020 11:57:04.225387   15900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1020 11:57:04.237309   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1020 11:57:04.537986   15900 addons.go:479] Verifying addon gcp-auth=true in "addons-053741"
	I1020 11:57:04.539499   15900 out.go:179] * Verifying gcp-auth addon...
	I1020 11:57:04.541644   15900 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1020 11:57:04.543933   15900 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1020 11:57:04.543953   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
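Verification of the gcp-auth addon follows the same pattern as the other addons: poll the single pod behind a label selector in the gcp-auth namespace. A one-shot equivalent, with the timeout assumed:

	kubectl -n gcp-auth wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=gcp-auth --timeout=3m   # timeout assumed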
	I1020 11:57:04.689652   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:04.708983   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:04.709188   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:05.044415   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:05.189826   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:05.209324   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:05.209382   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:05.544536   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:05.690150   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:05.709804   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:05.709975   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:05.861132   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1020 11:57:05.867979   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:06.045115   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:06.190522   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:06.209724   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:06.209800   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:06.391176   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:06.391201   15900 retry.go:31] will retry after 1.786080326s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 11:57:06.544680   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:06.689394   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:06.708721   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:06.708759   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:07.044989   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:07.189618   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:07.209050   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:07.209150   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:07.545390   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:07.690085   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:07.709800   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:07.710020   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:07.868336   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:08.045033   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:08.178221   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:08.189791   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:08.209509   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:08.209585   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:08.544885   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:08.689800   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:08.708911   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:08.709046   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:08.709272   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:08.709318   15900 retry.go:31] will retry after 3.484695849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
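By this point the retry intervals recorded above (0.34s, 0.32s, 0.34s, 0.78s, 1.26s, 1.74s, 1.79s, and now 3.48s) roughly double with random jitter, the capped exponential backoff implemented by minikube's retry.go. A minimal shell sketch of the same policy; the base delay, jitter range, and 8s cap are assumptions:

	delay_ms=300   # assumed base delay
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	      --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml; do
	  # sleep for delay_ms +/- 10% jitter, then double, capped at 8s (all assumed values)
	  sleep "$(awk -v d="$delay_ms" 'BEGIN { srand(); printf "%.3f", d / 1000 * (0.9 + 0.2 * rand()) }')"
	  delay_ms=$(( delay_ms * 2 )); [ "$delay_ms" -gt 8000 ] && delay_ms=8000
	done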
	I1020 11:57:09.044211   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:09.189765   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:09.209428   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:09.209568   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:09.545350   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:09.689946   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:09.709517   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:09.709749   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:10.044945   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:10.189964   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:10.209375   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:10.209501   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:10.368105   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:10.544631   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:10.690037   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:10.709525   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:10.709848   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:11.045176   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:11.189620   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:11.209290   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:11.209462   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:11.544627   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:11.689458   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:11.708975   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:11.709075   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:12.045203   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:12.189867   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:12.194887   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:12.208949   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:12.209090   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:12.369802   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:12.544973   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:12.690140   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:12.709422   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:12.709421   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:12.743433   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:12.743464   15900 retry.go:31] will retry after 3.332044795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 11:57:13.045006   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:13.189640   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:13.209491   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:13.209600   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:13.544917   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:13.689743   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:13.709394   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:13.709491   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:14.044746   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:14.189902   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:14.209384   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:14.209546   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:14.544610   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:14.690075   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:14.709558   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:14.709614   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:14.868313   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:15.044951   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:15.189547   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:15.209107   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:15.209334   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:15.544300   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:15.689999   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:15.709572   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:15.709788   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:16.045355   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:16.076495   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:16.190244   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:16.209748   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:16.209936   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:16.544910   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 11:57:16.607182   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:16.607208   15900 retry.go:31] will retry after 5.617223216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 11:57:16.689726   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:16.709364   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:16.709468   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:17.044536   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:17.190208   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:17.209554   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:17.209739   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:17.368282   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:17.544757   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:17.689377   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:17.709091   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:17.709120   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:18.044891   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:18.190180   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:18.209602   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:18.209618   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:18.544848   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:18.690000   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:18.709406   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:18.709488   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:19.045135   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:19.189524   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:19.209191   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:19.209262   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:19.544213   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:19.689723   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:19.709087   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:19.709251   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:19.867916   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:20.044734   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:20.190587   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:20.209056   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:20.209094   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:20.544070   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:20.689956   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:20.709585   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:20.709630   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:21.045222   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:21.190341   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:21.208854   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:21.209003   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:21.544920   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:21.689313   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:21.708517   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:21.708686   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:21.868139   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:22.045147   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:22.189714   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:22.209433   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:22.209556   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:22.224640   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:22.544995   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:22.689083   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:22.708826   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:22.708964   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:22.753099   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:22.753134   15900 retry.go:31] will retry after 6.164580225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 11:57:23.044329   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:23.189876   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:23.209472   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:23.209628   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:23.544486   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:23.689046   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:23.709645   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:23.709805   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:24.044707   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:24.189166   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:24.209568   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:24.209606   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:24.368169   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:24.544584   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:24.689887   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:24.709156   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:24.709404   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:25.045150   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:25.189751   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:25.209203   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:25.209391   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:25.544542   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:25.690187   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:25.709634   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:25.709709   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:26.045034   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:26.189880   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:26.209133   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:26.209370   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:26.544738   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:26.689269   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:26.709541   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:26.709668   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:26.868147   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:27.044746   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:27.189291   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:27.208701   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:27.208844   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:27.544972   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:27.689226   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:27.709620   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:27.709795   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:28.044940   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:28.189610   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:28.208887   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:28.209108   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:28.545292   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:28.690054   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:28.709710   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:28.709862   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:28.868259   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:28.918406   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:29.044732   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:29.189204   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:29.209969   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:29.210028   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:29.444724   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:29.444753   15900 retry.go:31] will retry after 13.378716535s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
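The validation failure above means at least one document in ig-crd.yaml reaches kubectl without the two fields every manifest must carry, apiVersion and kind (typically an empty or mis-rendered document between "---" separators). A minimal pre-flight check of that invariant, assuming gopkg.in/yaml.v3 and an illustrative local path; the real file lives at /etc/kubernetes/addons/ig-crd.yaml inside the node:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("ig-crd.yaml") // path is illustrative
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 0; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			panic(err)
		}
		if doc == nil {
			continue // empty document between '---' separators
		}
		// kubectl rejects any document missing these two fields.
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d: apiVersion or kind not set\n", i)
		}
	}
}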
	I1020 11:57:29.544837   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:29.689584   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:29.709148   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:29.709330   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:30.045308   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:30.189714   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:30.209238   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:30.209252   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:30.544152   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:30.689802   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:30.709331   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:30.709548   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:31.044182   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:31.189937   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:31.209400   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:31.209588   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:31.368171   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:31.544584   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:31.689165   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:31.709958   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:31.710028   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:32.045216   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:32.189620   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:32.209144   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:32.209183   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:32.544051   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:32.689797   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:32.709071   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:32.709257   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:33.044113   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:33.189598   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:33.208871   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:33.209073   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:33.368806   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:33.544075   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:33.689674   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:33.709044   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:33.709218   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:34.044312   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:34.189912   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:34.209245   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:34.209450   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:34.544553   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:34.690031   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:34.709486   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:34.709600   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:35.044684   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:35.189153   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:35.209744   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:35.209800   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:35.544803   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:35.689439   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:35.710371   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:35.710551   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:57:35.867971   15900 node_ready.go:57] node "addons-053741" has "Ready":"False" status (will retry)
	I1020 11:57:36.044804   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:36.189310   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:36.209813   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:36.209852   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:36.544905   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:36.689407   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:36.708858   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:36.709019   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:37.045010   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:37.189614   15900 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1020 11:57:37.189635   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:37.209209   15900 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1020 11:57:37.209236   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:37.209280   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
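The kapi.go lines above poll the cluster by label selector until every matched pod leaves Pending. A minimal client-go sketch of that shape, reusing the selector and kubeconfig path from the log (a single listing here, without the polling loop):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same selector the log waits on for the registry addon.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d Pods for label selector\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %s: %s\n", p.Name, p.Status.Phase) // e.g. Pending, Running
	}
}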
	I1020 11:57:37.370397   15900 node_ready.go:49] node "addons-053741" is "Ready"
	I1020 11:57:37.370428   15900 node_ready.go:38] duration metric: took 40.505294162s for node "addons-053741" to be "Ready" ...
	I1020 11:57:37.370442   15900 api_server.go:52] waiting for apiserver process to appear ...
	I1020 11:57:37.370492   15900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 11:57:37.393786   15900 api_server.go:72] duration metric: took 41.18586834s to wait for apiserver process to appear ...
	I1020 11:57:37.393873   15900 api_server.go:88] waiting for apiserver healthz status ...
	I1020 11:57:37.393910   15900 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1020 11:57:37.399317   15900 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1020 11:57:37.400412   15900 api_server.go:141] control plane version: v1.34.1
	I1020 11:57:37.400440   15900 api_server.go:131] duration metric: took 6.545915ms to wait for apiserver health ...
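The healthz wait above is a plain HTTP probe against the apiserver until it answers 200 "ok". A stdlib-only sketch of that polling shape, assuming the endpoint from the log; a real client would present the cluster's client certificates and trust its CA rather than skipping TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Demo only: skips cert verification and client auth.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthy")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond) // poll until healthy
	}
}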
	I1020 11:57:37.400449   15900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 11:57:37.404906   15900 system_pods.go:59] 20 kube-system pods found
	I1020 11:57:37.404939   15900 system_pods.go:61] "amd-gpu-device-plugin-pcd5k" [771d33eb-3b8b-487a-bd72-dca77feff4e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1020 11:57:37.404947   15900 system_pods.go:61] "coredns-66bc5c9577-ml6gb" [8b35f446-7459-4498-a912-fa6e117f71f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 11:57:37.404958   15900 system_pods.go:61] "csi-hostpath-attacher-0" [6b608358-4c84-492d-97f3-e8ccb5d5b09e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 11:57:37.404964   15900 system_pods.go:61] "csi-hostpath-resizer-0" [93a13756-bfb2-422a-a929-e9a0cc8f8b00] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 11:57:37.404970   15900 system_pods.go:61] "csi-hostpathplugin-2k9f8" [8d69df41-9ab6-496d-8959-3fe9f44a1aa9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 11:57:37.404987   15900 system_pods.go:61] "etcd-addons-053741" [e767a9c0-87b1-4f85-86f4-42ed7f6b82de] Running
	I1020 11:57:37.404991   15900 system_pods.go:61] "kindnet-5mww7" [35b78024-fef5-4aac-a1b3-5067f6439b9d] Running
	I1020 11:57:37.404995   15900 system_pods.go:61] "kube-apiserver-addons-053741" [1edabbb7-6964-442b-a0fc-b297b3077a72] Running
	I1020 11:57:37.404998   15900 system_pods.go:61] "kube-controller-manager-addons-053741" [a2c978a8-06ad-42f1-94ca-e4c3690cbf9a] Running
	I1020 11:57:37.405003   15900 system_pods.go:61] "kube-ingress-dns-minikube" [ca86be76-bb82-4f7d-b35e-67d8413990b8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 11:57:37.405007   15900 system_pods.go:61] "kube-proxy-f9l25" [bba7f334-1395-4153-9043-fe4ee4ccdae3] Running
	I1020 11:57:37.405014   15900 system_pods.go:61] "kube-scheduler-addons-053741" [b4bc16d7-7daa-4dac-bebf-93d68a8eaf9b] Running
	I1020 11:57:37.405019   15900 system_pods.go:61] "metrics-server-85b7d694d7-5b2cn" [73e0c71b-f373-4a85-a740-1cf7beddfe80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 11:57:37.405028   15900 system_pods.go:61] "nvidia-device-plugin-daemonset-p47g8" [25f655f5-d10c-4bb9-ba62-e9c4612d119b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 11:57:37.405034   15900 system_pods.go:61] "registry-6b586f9694-gb2mv" [da56cdd5-4eb6-423d-a493-a81e51cc6362] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 11:57:37.405040   15900 system_pods.go:61] "registry-creds-764b6fb674-6kcjl" [9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 11:57:37.405044   15900 system_pods.go:61] "registry-proxy-wfdh9" [2f070aaa-93de-4ff2-ac06-e2ff2df91120] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 11:57:37.405051   15900 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2ztzp" [815a9a74-0bcb-4dfb-8f2d-9c12ffe9b97f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:37.405059   15900 system_pods.go:61] "snapshot-controller-7d9fbc56b8-stswk" [aeb1275c-2ae5-49d2-846b-419229d0b6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:37.405071   15900 system_pods.go:61] "storage-provisioner" [5ea0010d-d95c-4399-88b1-f35b6822e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 11:57:37.405087   15900 system_pods.go:74] duration metric: took 4.631925ms to wait for pod list to return data ...
	I1020 11:57:37.405096   15900 default_sa.go:34] waiting for default service account to be created ...
	I1020 11:57:37.407587   15900 default_sa.go:45] found service account: "default"
	I1020 11:57:37.407611   15900 default_sa.go:55] duration metric: took 2.508057ms for default service account to be created ...
	I1020 11:57:37.407621   15900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 11:57:37.505884   15900 system_pods.go:86] 20 kube-system pods found
	I1020 11:57:37.505914   15900 system_pods.go:89] "amd-gpu-device-plugin-pcd5k" [771d33eb-3b8b-487a-bd72-dca77feff4e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1020 11:57:37.505921   15900 system_pods.go:89] "coredns-66bc5c9577-ml6gb" [8b35f446-7459-4498-a912-fa6e117f71f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 11:57:37.505928   15900 system_pods.go:89] "csi-hostpath-attacher-0" [6b608358-4c84-492d-97f3-e8ccb5d5b09e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 11:57:37.505933   15900 system_pods.go:89] "csi-hostpath-resizer-0" [93a13756-bfb2-422a-a929-e9a0cc8f8b00] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 11:57:37.505939   15900 system_pods.go:89] "csi-hostpathplugin-2k9f8" [8d69df41-9ab6-496d-8959-3fe9f44a1aa9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 11:57:37.505943   15900 system_pods.go:89] "etcd-addons-053741" [e767a9c0-87b1-4f85-86f4-42ed7f6b82de] Running
	I1020 11:57:37.505951   15900 system_pods.go:89] "kindnet-5mww7" [35b78024-fef5-4aac-a1b3-5067f6439b9d] Running
	I1020 11:57:37.505954   15900 system_pods.go:89] "kube-apiserver-addons-053741" [1edabbb7-6964-442b-a0fc-b297b3077a72] Running
	I1020 11:57:37.505958   15900 system_pods.go:89] "kube-controller-manager-addons-053741" [a2c978a8-06ad-42f1-94ca-e4c3690cbf9a] Running
	I1020 11:57:37.505962   15900 system_pods.go:89] "kube-ingress-dns-minikube" [ca86be76-bb82-4f7d-b35e-67d8413990b8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 11:57:37.505968   15900 system_pods.go:89] "kube-proxy-f9l25" [bba7f334-1395-4153-9043-fe4ee4ccdae3] Running
	I1020 11:57:37.505972   15900 system_pods.go:89] "kube-scheduler-addons-053741" [b4bc16d7-7daa-4dac-bebf-93d68a8eaf9b] Running
	I1020 11:57:37.505977   15900 system_pods.go:89] "metrics-server-85b7d694d7-5b2cn" [73e0c71b-f373-4a85-a740-1cf7beddfe80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 11:57:37.505985   15900 system_pods.go:89] "nvidia-device-plugin-daemonset-p47g8" [25f655f5-d10c-4bb9-ba62-e9c4612d119b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 11:57:37.505993   15900 system_pods.go:89] "registry-6b586f9694-gb2mv" [da56cdd5-4eb6-423d-a493-a81e51cc6362] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 11:57:37.506000   15900 system_pods.go:89] "registry-creds-764b6fb674-6kcjl" [9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 11:57:37.506007   15900 system_pods.go:89] "registry-proxy-wfdh9" [2f070aaa-93de-4ff2-ac06-e2ff2df91120] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 11:57:37.506012   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ztzp" [815a9a74-0bcb-4dfb-8f2d-9c12ffe9b97f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:37.506020   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-stswk" [aeb1275c-2ae5-49d2-846b-419229d0b6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:37.506025   15900 system_pods.go:89] "storage-provisioner" [5ea0010d-d95c-4399-88b1-f35b6822e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 11:57:37.506042   15900 retry.go:31] will retry after 266.264055ms: missing components: kube-dns
	I1020 11:57:37.545394   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:37.691693   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:37.709866   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:37.709904   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:37.794565   15900 system_pods.go:86] 20 kube-system pods found
	I1020 11:57:37.794604   15900 system_pods.go:89] "amd-gpu-device-plugin-pcd5k" [771d33eb-3b8b-487a-bd72-dca77feff4e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1020 11:57:37.794615   15900 system_pods.go:89] "coredns-66bc5c9577-ml6gb" [8b35f446-7459-4498-a912-fa6e117f71f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 11:57:37.794624   15900 system_pods.go:89] "csi-hostpath-attacher-0" [6b608358-4c84-492d-97f3-e8ccb5d5b09e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 11:57:37.794643   15900 system_pods.go:89] "csi-hostpath-resizer-0" [93a13756-bfb2-422a-a929-e9a0cc8f8b00] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 11:57:37.794651   15900 system_pods.go:89] "csi-hostpathplugin-2k9f8" [8d69df41-9ab6-496d-8959-3fe9f44a1aa9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 11:57:37.794656   15900 system_pods.go:89] "etcd-addons-053741" [e767a9c0-87b1-4f85-86f4-42ed7f6b82de] Running
	I1020 11:57:37.794663   15900 system_pods.go:89] "kindnet-5mww7" [35b78024-fef5-4aac-a1b3-5067f6439b9d] Running
	I1020 11:57:37.794668   15900 system_pods.go:89] "kube-apiserver-addons-053741" [1edabbb7-6964-442b-a0fc-b297b3077a72] Running
	I1020 11:57:37.794673   15900 system_pods.go:89] "kube-controller-manager-addons-053741" [a2c978a8-06ad-42f1-94ca-e4c3690cbf9a] Running
	I1020 11:57:37.794684   15900 system_pods.go:89] "kube-ingress-dns-minikube" [ca86be76-bb82-4f7d-b35e-67d8413990b8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 11:57:37.794689   15900 system_pods.go:89] "kube-proxy-f9l25" [bba7f334-1395-4153-9043-fe4ee4ccdae3] Running
	I1020 11:57:37.794697   15900 system_pods.go:89] "kube-scheduler-addons-053741" [b4bc16d7-7daa-4dac-bebf-93d68a8eaf9b] Running
	I1020 11:57:37.794704   15900 system_pods.go:89] "metrics-server-85b7d694d7-5b2cn" [73e0c71b-f373-4a85-a740-1cf7beddfe80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 11:57:37.794710   15900 system_pods.go:89] "nvidia-device-plugin-daemonset-p47g8" [25f655f5-d10c-4bb9-ba62-e9c4612d119b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 11:57:37.794718   15900 system_pods.go:89] "registry-6b586f9694-gb2mv" [da56cdd5-4eb6-423d-a493-a81e51cc6362] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 11:57:37.794755   15900 system_pods.go:89] "registry-creds-764b6fb674-6kcjl" [9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 11:57:37.794764   15900 system_pods.go:89] "registry-proxy-wfdh9" [2f070aaa-93de-4ff2-ac06-e2ff2df91120] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 11:57:37.794783   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ztzp" [815a9a74-0bcb-4dfb-8f2d-9c12ffe9b97f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:37.794792   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-stswk" [aeb1275c-2ae5-49d2-846b-419229d0b6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:37.794800   15900 system_pods.go:89] "storage-provisioner" [5ea0010d-d95c-4399-88b1-f35b6822e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 11:57:37.794817   15900 retry.go:31] will retry after 253.82825ms: missing components: kube-dns
	I1020 11:57:38.045134   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:38.054965   15900 system_pods.go:86] 20 kube-system pods found
	I1020 11:57:38.055008   15900 system_pods.go:89] "amd-gpu-device-plugin-pcd5k" [771d33eb-3b8b-487a-bd72-dca77feff4e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1020 11:57:38.055020   15900 system_pods.go:89] "coredns-66bc5c9577-ml6gb" [8b35f446-7459-4498-a912-fa6e117f71f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 11:57:38.055031   15900 system_pods.go:89] "csi-hostpath-attacher-0" [6b608358-4c84-492d-97f3-e8ccb5d5b09e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 11:57:38.055038   15900 system_pods.go:89] "csi-hostpath-resizer-0" [93a13756-bfb2-422a-a929-e9a0cc8f8b00] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 11:57:38.055046   15900 system_pods.go:89] "csi-hostpathplugin-2k9f8" [8d69df41-9ab6-496d-8959-3fe9f44a1aa9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 11:57:38.055051   15900 system_pods.go:89] "etcd-addons-053741" [e767a9c0-87b1-4f85-86f4-42ed7f6b82de] Running
	I1020 11:57:38.055057   15900 system_pods.go:89] "kindnet-5mww7" [35b78024-fef5-4aac-a1b3-5067f6439b9d] Running
	I1020 11:57:38.055062   15900 system_pods.go:89] "kube-apiserver-addons-053741" [1edabbb7-6964-442b-a0fc-b297b3077a72] Running
	I1020 11:57:38.055067   15900 system_pods.go:89] "kube-controller-manager-addons-053741" [a2c978a8-06ad-42f1-94ca-e4c3690cbf9a] Running
	I1020 11:57:38.055086   15900 system_pods.go:89] "kube-ingress-dns-minikube" [ca86be76-bb82-4f7d-b35e-67d8413990b8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 11:57:38.055092   15900 system_pods.go:89] "kube-proxy-f9l25" [bba7f334-1395-4153-9043-fe4ee4ccdae3] Running
	I1020 11:57:38.055098   15900 system_pods.go:89] "kube-scheduler-addons-053741" [b4bc16d7-7daa-4dac-bebf-93d68a8eaf9b] Running
	I1020 11:57:38.055113   15900 system_pods.go:89] "metrics-server-85b7d694d7-5b2cn" [73e0c71b-f373-4a85-a740-1cf7beddfe80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 11:57:38.055122   15900 system_pods.go:89] "nvidia-device-plugin-daemonset-p47g8" [25f655f5-d10c-4bb9-ba62-e9c4612d119b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 11:57:38.055130   15900 system_pods.go:89] "registry-6b586f9694-gb2mv" [da56cdd5-4eb6-423d-a493-a81e51cc6362] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 11:57:38.055138   15900 system_pods.go:89] "registry-creds-764b6fb674-6kcjl" [9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 11:57:38.055148   15900 system_pods.go:89] "registry-proxy-wfdh9" [2f070aaa-93de-4ff2-ac06-e2ff2df91120] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 11:57:38.055156   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ztzp" [815a9a74-0bcb-4dfb-8f2d-9c12ffe9b97f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:38.055165   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-stswk" [aeb1275c-2ae5-49d2-846b-419229d0b6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:38.055172   15900 system_pods.go:89] "storage-provisioner" [5ea0010d-d95c-4399-88b1-f35b6822e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 11:57:38.055188   15900 retry.go:31] will retry after 360.959257ms: missing components: kube-dns
	I1020 11:57:38.191039   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:38.210309   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:38.210383   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:38.421546   15900 system_pods.go:86] 20 kube-system pods found
	I1020 11:57:38.421588   15900 system_pods.go:89] "amd-gpu-device-plugin-pcd5k" [771d33eb-3b8b-487a-bd72-dca77feff4e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1020 11:57:38.421597   15900 system_pods.go:89] "coredns-66bc5c9577-ml6gb" [8b35f446-7459-4498-a912-fa6e117f71f5] Running
	I1020 11:57:38.421607   15900 system_pods.go:89] "csi-hostpath-attacher-0" [6b608358-4c84-492d-97f3-e8ccb5d5b09e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 11:57:38.421615   15900 system_pods.go:89] "csi-hostpath-resizer-0" [93a13756-bfb2-422a-a929-e9a0cc8f8b00] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 11:57:38.421625   15900 system_pods.go:89] "csi-hostpathplugin-2k9f8" [8d69df41-9ab6-496d-8959-3fe9f44a1aa9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 11:57:38.421631   15900 system_pods.go:89] "etcd-addons-053741" [e767a9c0-87b1-4f85-86f4-42ed7f6b82de] Running
	I1020 11:57:38.421637   15900 system_pods.go:89] "kindnet-5mww7" [35b78024-fef5-4aac-a1b3-5067f6439b9d] Running
	I1020 11:57:38.421642   15900 system_pods.go:89] "kube-apiserver-addons-053741" [1edabbb7-6964-442b-a0fc-b297b3077a72] Running
	I1020 11:57:38.421647   15900 system_pods.go:89] "kube-controller-manager-addons-053741" [a2c978a8-06ad-42f1-94ca-e4c3690cbf9a] Running
	I1020 11:57:38.421655   15900 system_pods.go:89] "kube-ingress-dns-minikube" [ca86be76-bb82-4f7d-b35e-67d8413990b8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 11:57:38.421662   15900 system_pods.go:89] "kube-proxy-f9l25" [bba7f334-1395-4153-9043-fe4ee4ccdae3] Running
	I1020 11:57:38.421667   15900 system_pods.go:89] "kube-scheduler-addons-053741" [b4bc16d7-7daa-4dac-bebf-93d68a8eaf9b] Running
	I1020 11:57:38.421684   15900 system_pods.go:89] "metrics-server-85b7d694d7-5b2cn" [73e0c71b-f373-4a85-a740-1cf7beddfe80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 11:57:38.421692   15900 system_pods.go:89] "nvidia-device-plugin-daemonset-p47g8" [25f655f5-d10c-4bb9-ba62-e9c4612d119b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 11:57:38.421699   15900 system_pods.go:89] "registry-6b586f9694-gb2mv" [da56cdd5-4eb6-423d-a493-a81e51cc6362] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 11:57:38.421707   15900 system_pods.go:89] "registry-creds-764b6fb674-6kcjl" [9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 11:57:38.421714   15900 system_pods.go:89] "registry-proxy-wfdh9" [2f070aaa-93de-4ff2-ac06-e2ff2df91120] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 11:57:38.421722   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2ztzp" [815a9a74-0bcb-4dfb-8f2d-9c12ffe9b97f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:38.421732   15900 system_pods.go:89] "snapshot-controller-7d9fbc56b8-stswk" [aeb1275c-2ae5-49d2-846b-419229d0b6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:57:38.421737   15900 system_pods.go:89] "storage-provisioner" [5ea0010d-d95c-4399-88b1-f35b6822e8d5] Running
	I1020 11:57:38.421751   15900 system_pods.go:126] duration metric: took 1.014122806s to wait for k8s-apps to be running ...
	I1020 11:57:38.421762   15900 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 11:57:38.421825   15900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 11:57:38.439493   15900 system_svc.go:56] duration metric: took 17.722957ms WaitForService to wait for kubelet
	I1020 11:57:38.439523   15900 kubeadm.go:586] duration metric: took 42.231622759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 11:57:38.439545   15900 node_conditions.go:102] verifying NodePressure condition ...
	I1020 11:57:38.442832   15900 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 11:57:38.442864   15900 node_conditions.go:123] node cpu capacity is 8
	I1020 11:57:38.442880   15900 node_conditions.go:105] duration metric: took 3.330031ms to run NodePressure ...
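The NodePressure step reads the node object and inspects its conditions and capacity (here 8 cpus and 304681132Ki of ephemeral storage). A client-go sketch of that read, with the node name and kubeconfig path taken from the log and everything else illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-053741", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The "Ready" status the earlier node_ready.go lines were waiting on.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready: %s\n", c.Status)
		}
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("cpu capacity: %s, ephemeral storage: %s\n", cpu.String(), storage.String())
}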
	I1020 11:57:38.442898   15900 start.go:241] waiting for startup goroutines ...
	I1020 11:57:38.544694   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:38.690203   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:38.710055   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:38.710157   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:39.045597   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:39.190862   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:39.211512   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:39.211661   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:39.544923   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:39.689577   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:39.709087   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:39.709160   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:40.045133   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:40.189963   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:40.209527   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:40.209629   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:40.544648   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:40.690242   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:40.709195   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:40.709241   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:41.045826   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:41.190115   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:41.210666   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:41.210749   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:41.585743   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:41.689489   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:41.708868   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:41.709000   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:42.044980   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:42.190057   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:42.209800   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:42.209843   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:42.544565   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:42.690933   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:42.709853   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:42.709911   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:42.824203   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:57:43.045055   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:43.190277   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:43.209967   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:43.209988   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1020 11:57:43.419595   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:57:43.419628   15900 retry.go:31] will retry after 24.456993091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
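Between failed applies, retry.go backs off with a growing delay (13.4s after the first failure above, 24.5s here). A sketch of that retry-with-backoff shape; the doubling factor and jitter below are assumptions for illustration, not minikube's actual parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Hypothetical jitter; the real backoff schedule may differ.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return errors.New("all attempts failed")
}

func main() {
	_ = retry(3, 10*time.Second, func() error {
		return errors.New("apply failed") // stand-in for the kubectl apply
	})
}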
	I1020 11:57:43.545905   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:43.690145   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:43.709762   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:43.709882   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:44.045322   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:44.190662   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:44.210947   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:44.211089   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:44.545074   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:44.690115   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:44.790429   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:44.790649   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:45.045219   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:45.190586   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:45.209308   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:45.209530   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:45.574992   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:45.732808   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:45.732868   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:45.732912   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:46.044263   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:46.189863   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:46.209338   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:46.209395   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:46.544982   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:46.690016   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:46.709336   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:46.709538   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:47.045269   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:47.190607   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:47.209739   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:47.210057   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:47.545657   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:47.689801   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:47.709467   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:47.709503   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:48.045714   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:48.190151   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:48.210336   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:48.211691   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:48.544786   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:48.690121   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:48.709895   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:48.709919   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:49.045364   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:49.191497   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:49.209083   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:49.209341   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:49.544855   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:49.690448   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:49.710445   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:49.710456   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:50.045257   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:50.190527   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:50.209222   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:50.209417   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:50.545746   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:50.689921   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:50.709589   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:50.709711   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:51.045328   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:51.191117   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:51.209818   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:51.210039   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:51.545731   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:51.690026   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:51.753708   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:51.753873   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:52.045222   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:52.190410   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:52.210075   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:52.210123   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:52.545194   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:52.690440   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:52.790710   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:52.790864   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:53.044660   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:53.189625   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:53.209456   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:53.209505   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:53.545519   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:53.689682   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:53.709060   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:53.709197   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:54.045731   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:54.190243   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:54.211264   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:54.212242   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:54.547842   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:54.691283   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:54.710831   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:54.711283   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:55.045467   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:55.190492   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:55.210252   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:55.210599   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:55.664932   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:55.768247   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:55.768362   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:55.768513   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:56.046439   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:56.189951   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:56.210083   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:56.210177   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:56.545473   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:56.690869   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:56.710256   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:56.710292   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:57.044972   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:57.190359   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:57.210457   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:57.210512   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:57.545276   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:57.690604   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:57.709718   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:57.709754   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:58.044726   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:58.189676   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:58.211995   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:58.212034   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:58.544543   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:58.690115   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:58.709979   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:58.710208   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:59.045021   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:59.189928   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:59.209490   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:57:59.209532   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:59.544498   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:57:59.690807   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:57:59.709429   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:57:59.709582   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:00.045815   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:00.189902   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:00.209095   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:00.209229   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:00.545089   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:00.689957   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:00.709676   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:00.709765   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:01.045832   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:01.190162   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:01.211025   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:01.211134   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:01.545415   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:01.690224   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:01.791529   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:01.791650   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:02.044732   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:02.189836   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:02.209386   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:02.209471   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:02.545259   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:02.691055   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:02.709751   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:02.709927   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:03.044603   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:03.190948   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:03.210085   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:03.210150   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:03.545145   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:03.690564   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:03.709451   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:03.709506   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:04.044722   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:04.190104   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:04.209572   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:04.209641   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:04.544581   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:04.690430   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:04.708823   15900 kapi.go:107] duration metric: took 1m7.002562578s to wait for kubernetes.io/minikube-addons=registry ...
	I1020 11:58:04.708992   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:05.045033   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:05.190646   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:05.210559   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:05.553594   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:05.692239   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:05.741596   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:06.045750   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:06.190143   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:06.210366   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:06.545662   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:06.690292   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:06.709630   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:07.045306   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:07.190514   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:07.209150   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:07.544739   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:07.690687   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:07.709370   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:07.877422   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:08.044803   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:08.189847   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:08.209805   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:08.544244   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 11:58:08.578229   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:08.578263   15900 retry.go:31] will retry after 32.122770018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
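The failure above is a client-side schema check, not an API-server rejection: kubectl validates that every document in an applied manifest declares both apiVersion and kind before submitting it, and at least one document in ig-crd.yaml is missing them (everything in ig-deployment.yaml applies cleanly, hence the run of "unchanged" lines). A minimal sketch of the same check, assuming a single-document manifest and the sigs.k8s.io/yaml package (this is an illustration of what the validator rejects, not minikube code):

	package main

	import (
		"fmt"
		"os"

		"sigs.k8s.io/yaml"
	)

	// typeMeta mirrors the two fields kubectl's validation insists on.
	type typeMeta struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}

	func main() {
		// Path taken from the log above; any manifest works.
		data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			panic(err)
		}
		var tm typeMeta
		if err := yaml.Unmarshal(data, &tm); err != nil {
			panic(err)
		}
		var missing []string
		if tm.APIVersion == "" {
			missing = append(missing, "apiVersion not set")
		}
		if tm.Kind == "" {
			missing = append(missing, "kind not set")
		}
		if len(missing) > 0 {
			fmt.Printf("error validating data: %v\n", missing)
		}
	}

Passing --validate=false, as the error message suggests, merely skips the check; the durable fix is a manifest whose documents all declare both fields (for a CRD, apiVersion: apiextensions.k8s.io/v1 and kind: CustomResourceDefinition).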
	I1020 11:58:08.690706   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:08.709401   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:09.045120   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:09.189811   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:09.209158   15900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:09.545294   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:09.690146   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:09.709905   15900 kapi.go:107] duration metric: took 1m12.00368565s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1020 11:58:10.044469   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:10.191070   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:10.546608   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:10.690536   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:11.044926   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:11.190036   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:11.545576   15900 kapi.go:107] duration metric: took 1m7.003927915s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1020 11:58:11.579682   15900 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-053741 cluster.
	I1020 11:58:11.592494   15900 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1020 11:58:11.666184   15900 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1020 11:58:11.690120   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:12.191542   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:12.690410   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:13.189946   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:13.690895   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:14.191117   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:14.690840   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:15.190683   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:15.690650   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:16.190478   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:16.690007   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:17.190490   15900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:17.690095   15900 kapi.go:107] duration metric: took 1m19.50347858s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
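All four kapi.go waits above follow the same shape: list pods by label selector on a fixed interval, log the current phase while any pod is still Pending, and record a duration metric once every match is Running. A minimal client-go sketch of that pattern (an approximation of the behavior seen in the lines above, not minikube's actual kapi.WaitForPods):

	package kapi

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods polls pods matching selector in ns until all are Running,
	// logging the lagging state on each tick like the lines above.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval time.Duration) error {
		start := time.Now()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			allRunning := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					break
				}
			}
			if allRunning {
				fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(interval):
			}
		}
	}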
	I1020 11:58:40.703166   15900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1020 11:58:41.233430   15900 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1020 11:58:41.233547   15900 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
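Note the sequence: the first apply at 11:58:08 failed, retry.go scheduled one retry with a randomized backoff (32.12s), the retry at 11:58:40 hit the same validation error, and enabling 'inspektor-gadget' was then reported as failed while the remaining addons continued. A sketch of that retry shape, assuming exponential backoff with jitter (minikube's actual retry.go parameters are not visible in this log):

	package retryutil

	import (
		"log"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn up to attempts times, sleeping an
	// exponentially growing, jittered interval between tries.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			wait := base << uint(i)                         // base, 2*base, 4*base, ...
			wait += time.Duration(rand.Int63n(int64(wait))) // jitter, e.g. the 32.12s above
			log.Printf("will retry after %s: %v", wait, err)
			time.Sleep(wait)
		}
		return err
	}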
	I1020 11:58:41.235358   15900 out.go:179] * Enabled addons: ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, registry-creds, cloud-spanner, default-storageclass, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1020 11:58:41.236604   15900 addons.go:514] duration metric: took 1m45.028690198s for enable addons: enabled=[ingress-dns nvidia-device-plugin amd-gpu-device-plugin storage-provisioner registry-creds cloud-spanner default-storageclass metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1020 11:58:41.236647   15900 start.go:246] waiting for cluster config update ...
	I1020 11:58:41.236670   15900 start.go:255] writing updated cluster config ...
	I1020 11:58:41.236937   15900 ssh_runner.go:195] Run: rm -f paused
	I1020 11:58:41.240751   15900 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 11:58:41.244017   15900 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ml6gb" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.247687   15900 pod_ready.go:94] pod "coredns-66bc5c9577-ml6gb" is "Ready"
	I1020 11:58:41.247706   15900 pod_ready.go:86] duration metric: took 3.670368ms for pod "coredns-66bc5c9577-ml6gb" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.249396   15900 pod_ready.go:83] waiting for pod "etcd-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.252834   15900 pod_ready.go:94] pod "etcd-addons-053741" is "Ready"
	I1020 11:58:41.252858   15900 pod_ready.go:86] duration metric: took 3.444426ms for pod "etcd-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.254584   15900 pod_ready.go:83] waiting for pod "kube-apiserver-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.257917   15900 pod_ready.go:94] pod "kube-apiserver-addons-053741" is "Ready"
	I1020 11:58:41.257940   15900 pod_ready.go:86] duration metric: took 3.337517ms for pod "kube-apiserver-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.259549   15900 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.644244   15900 pod_ready.go:94] pod "kube-controller-manager-addons-053741" is "Ready"
	I1020 11:58:41.644271   15900 pod_ready.go:86] duration metric: took 384.706077ms for pod "kube-controller-manager-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:41.844411   15900 pod_ready.go:83] waiting for pod "kube-proxy-f9l25" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:42.243951   15900 pod_ready.go:94] pod "kube-proxy-f9l25" is "Ready"
	I1020 11:58:42.243979   15900 pod_ready.go:86] duration metric: took 399.541143ms for pod "kube-proxy-f9l25" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:42.445467   15900 pod_ready.go:83] waiting for pod "kube-scheduler-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:42.844731   15900 pod_ready.go:94] pod "kube-scheduler-addons-053741" is "Ready"
	I1020 11:58:42.844756   15900 pod_ready.go:86] duration metric: took 399.262918ms for pod "kube-scheduler-addons-053741" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 11:58:42.844766   15900 pod_ready.go:40] duration metric: took 1.603974009s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
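The pod_ready checks differ from the earlier kapi waits: instead of the pod phase, they inspect the PodReady condition in status.conditions, which is why the already-running control-plane pods report "Ready" within milliseconds here. A minimal helper expressing that check (a sketch of the pattern, not minikube's pod_ready.go):

	package readiness

	import corev1 "k8s.io/api/core/v1"

	// podIsReady reports whether the pod's Ready condition is True.
	func podIsReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}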
	I1020 11:58:42.888734   15900 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 11:58:42.890440   15900 out.go:179] * Done! kubectl is now configured to use "addons-053741" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 20 11:58:43 addons-053741 crio[771]: time="2025-10-20T11:58:43.740981802Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 20 11:58:45 addons-053741 crio[771]: time="2025-10-20T11:58:45.023676939Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=531eae69-3829-4615-8da3-1d919992a6ee name=/runtime.v1.ImageService/PullImage
	Oct 20 11:58:45 addons-053741 crio[771]: time="2025-10-20T11:58:45.024264808Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7b19d581-5aa5-4a92-8979-ad2857152997 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 11:58:45 addons-053741 crio[771]: time="2025-10-20T11:58:45.025555723Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=71826a8e-7103-46ca-9b53-22ff1ff4cd0e name=/runtime.v1.ImageService/ImageStatus
	Oct 20 11:58:45 addons-053741 crio[771]: time="2025-10-20T11:58:45.029241415Z" level=info msg="Creating container: default/busybox/busybox" id=f461787b-0373-4dc3-823f-2e8acd6b99eb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 11:58:45 addons-053741 crio[771]: time="2025-10-20T11:58:45.02936107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 11:58:45 addons-053741 crio[771]: time="2025-10-20T11:58:45.034886011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 11:58:45 addons-053741 crio[771]: time="2025-10-20T11:58:45.035514374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 11:58:45 addons-053741 crio[771]: time="2025-10-20T11:58:45.06387769Z" level=info msg="Created container d9d7c36ffbac94950781dfc7b67424f051da0db48fbad2cb07a5278b6d39418d: default/busybox/busybox" id=f461787b-0373-4dc3-823f-2e8acd6b99eb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 11:58:45 addons-053741 crio[771]: time="2025-10-20T11:58:45.064449795Z" level=info msg="Starting container: d9d7c36ffbac94950781dfc7b67424f051da0db48fbad2cb07a5278b6d39418d" id=6c181507-7a75-4abe-ada3-5193ff6806c1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 11:58:45 addons-053741 crio[771]: time="2025-10-20T11:58:45.066158133Z" level=info msg="Started container" PID=6573 containerID=d9d7c36ffbac94950781dfc7b67424f051da0db48fbad2cb07a5278b6d39418d description=default/busybox/busybox id=6c181507-7a75-4abe-ada3-5193ff6806c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8de0161b1dba1591cfd777513b553cdb5ebca20b31ab7553b73dd019be7f5f33
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.06755777Z" level=info msg="Removing container: edf1b3b2ae4b07d3bd7c10de23340b8ddc085ebbb1b5790c915c3dc7d5feac3c" id=20296309-92bc-4356-b3ed-0eee2a20b1de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.074340172Z" level=info msg="Removed container edf1b3b2ae4b07d3bd7c10de23340b8ddc085ebbb1b5790c915c3dc7d5feac3c: gcp-auth/gcp-auth-certs-patch-49pqf/patch" id=20296309-92bc-4356-b3ed-0eee2a20b1de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.075763807Z" level=info msg="Removing container: 60b88b961937f34610c696a2fa9d9bf5c0fae512df4e85cb0bc4e253bbca7e52" id=128e46c8-bf5e-471a-9b75-404c3a2dd8e0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.082387931Z" level=info msg="Removed container 60b88b961937f34610c696a2fa9d9bf5c0fae512df4e85cb0bc4e253bbca7e52: gcp-auth/gcp-auth-certs-create-2c4sm/create" id=128e46c8-bf5e-471a-9b75-404c3a2dd8e0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.08472571Z" level=info msg="Stopping pod sandbox: 1c4b1f85e70ffb8cb0b23127b9ac9646887fb7dfa63b396c344e11f9d2bfd930" id=4199e9a6-12b0-4a6f-9511-9fc3fa245e53 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.084766257Z" level=info msg="Stopped pod sandbox (already stopped): 1c4b1f85e70ffb8cb0b23127b9ac9646887fb7dfa63b396c344e11f9d2bfd930" id=4199e9a6-12b0-4a6f-9511-9fc3fa245e53 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.085198814Z" level=info msg="Removing pod sandbox: 1c4b1f85e70ffb8cb0b23127b9ac9646887fb7dfa63b396c344e11f9d2bfd930" id=87cc2c6a-ecc4-46f2-a262-2997f54abd76 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.087962689Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.088029381Z" level=info msg="Removed pod sandbox: 1c4b1f85e70ffb8cb0b23127b9ac9646887fb7dfa63b396c344e11f9d2bfd930" id=87cc2c6a-ecc4-46f2-a262-2997f54abd76 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.088543483Z" level=info msg="Stopping pod sandbox: bf170d0c6b449ac3d79f5df2900f34476ea840e22f10eb38977f3ac31303f824" id=08244b1c-8cbb-4845-9802-62292c7374f0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.088595531Z" level=info msg="Stopped pod sandbox (already stopped): bf170d0c6b449ac3d79f5df2900f34476ea840e22f10eb38977f3ac31303f824" id=08244b1c-8cbb-4845-9802-62292c7374f0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.088882009Z" level=info msg="Removing pod sandbox: bf170d0c6b449ac3d79f5df2900f34476ea840e22f10eb38977f3ac31303f824" id=a701821e-803b-4b62-8d87-6bd4309d6a1e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.091607821Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 11:58:50 addons-053741 crio[771]: time="2025-10-20T11:58:50.091658394Z" level=info msg="Removed pod sandbox: bf170d0c6b449ac3d79f5df2900f34476ea840e22f10eb38977f3ac31303f824" id=a701821e-803b-4b62-8d87-6bd4309d6a1e name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	d9d7c36ffbac9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          9 seconds ago        Running             busybox                                  0                   8de0161b1dba1       busybox                                     default
	6df2005c3dce4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          37 seconds ago       Running             csi-snapshotter                          0                   6ac12bc79833f       csi-hostpathplugin-2k9f8                    kube-system
	02ac8e9a477c9       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          38 seconds ago       Running             csi-provisioner                          0                   6ac12bc79833f       csi-hostpathplugin-2k9f8                    kube-system
	2d3daf84e6c96       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            39 seconds ago       Running             liveness-probe                           0                   6ac12bc79833f       csi-hostpathplugin-2k9f8                    kube-system
	dd4bb1b4f7046       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           39 seconds ago       Running             hostpath                                 0                   6ac12bc79833f       csi-hostpathplugin-2k9f8                    kube-system
	2edf50baac6ba       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            40 seconds ago       Running             gadget                                   0                   7bc7d0b7faab4       gadget-bb9nf                                gadget
	6b0cf0f679a40       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                43 seconds ago       Running             node-driver-registrar                    0                   6ac12bc79833f       csi-hostpathplugin-2k9f8                    kube-system
	7a2686ee3a166       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 43 seconds ago       Running             gcp-auth                                 0                   c2a8ddbe16025       gcp-auth-78565c9fb4-6zzdw                   gcp-auth
	9a70e76cd0e96       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             45 seconds ago       Running             controller                               0                   39eac987f8194       ingress-nginx-controller-675c5ddd98-wwnpt   ingress-nginx
	c4b5fa9dcee14       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     49 seconds ago       Running             amd-gpu-device-plugin                    0                   6ab594407f7bf       amd-gpu-device-plugin-pcd5k                 kube-system
	b5d282533aea8       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              49 seconds ago       Running             csi-resizer                              0                   f88b926aa61e0       csi-hostpath-resizer-0                      kube-system
	c6bc622719c6a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              50 seconds ago       Running             registry-proxy                           0                   c8608a162b543       registry-proxy-wfdh9                        kube-system
	570ba942e1d25       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             52 seconds ago       Exited              patch                                    2                   dc339a9258caa       ingress-nginx-admission-patch-4krq9         ingress-nginx
	28a9df06a407b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   52 seconds ago       Running             csi-external-health-monitor-controller   0                   6ac12bc79833f       csi-hostpathplugin-2k9f8                    kube-system
	8ee09292e70de       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     53 seconds ago       Running             nvidia-device-plugin-ctr                 0                   42795724b8454       nvidia-device-plugin-daemonset-p47g8        kube-system
	cb34c9f1c580c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   7c8d52ca183a2       snapshot-controller-7d9fbc56b8-2ztzp        kube-system
	51a80cd6bc076       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   6b128ea89ebfa       snapshot-controller-7d9fbc56b8-stswk        kube-system
	d9a30b9299a6e       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   ea0b994c8b79f       csi-hostpath-attacher-0                     kube-system
	9fd6582d19e66       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   1f2900ef06fbe       yakd-dashboard-5ff678cb9-npcnf              yakd-dashboard
	fbcca8cc89164       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   695787961d8e9       ingress-nginx-admission-create-jbfb9        ingress-nginx
	360ec23af69c7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   23e34dd134d63       local-path-provisioner-648f6765c9-ndz4w     local-path-storage
	9370bc1dd29d3       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   92c6654f1a567       registry-6b586f9694-gb2mv                   kube-system
	307bd8f9af404       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   3f13cf3aca3d2       cloud-spanner-emulator-86bd5cbb97-xcpnk     default
	fa80ac0b9cd9c       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   94166c3150192       metrics-server-85b7d694d7-5b2cn             kube-system
	67371a5015804       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   35e61012c6c5a       kube-ingress-dns-minikube                   kube-system
	0f15b4706c771       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   3afdcb527b7a6       coredns-66bc5c9577-ml6gb                    kube-system
	b5c7f9c4b30eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   4f3dd3c7dfc62       storage-provisioner                         kube-system
	52948a7351d92       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   175ebef6509ca       kindnet-5mww7                               kube-system
	daef0b8bb4e24       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   31ade7fbe9491       kube-proxy-f9l25                            kube-system
	3638400d972a3       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   2680c662916a1       kube-controller-manager-addons-053741       kube-system
	fac7a84a8cd03       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   5224b616bc140       kube-apiserver-addons-053741                kube-system
	a165b7f5e69ec       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   f806d78362563       kube-scheduler-addons-053741                kube-system
	d6564015bbe91       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   7c40925ee9a35       etcd-addons-053741                          kube-system
	
	
	==> coredns [0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a] <==
	[INFO] 10.244.0.13:37643 - 7510 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.004554435s
	[INFO] 10.244.0.13:60502 - 50067 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000078663s
	[INFO] 10.244.0.13:60502 - 50509 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000115681s
	[INFO] 10.244.0.13:42980 - 25530 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000046814s
	[INFO] 10.244.0.13:42980 - 25091 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000069201s
	[INFO] 10.244.0.13:38148 - 47609 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000064612s
	[INFO] 10.244.0.13:38148 - 47133 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000099909s
	[INFO] 10.244.0.13:35475 - 12126 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091704s
	[INFO] 10.244.0.13:35475 - 12294 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000120189s
	[INFO] 10.244.0.21:35635 - 25511 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216326s
	[INFO] 10.244.0.21:48496 - 32709 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000269033s
	[INFO] 10.244.0.21:36932 - 55629 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144612s
	[INFO] 10.244.0.21:52637 - 57991 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000196948s
	[INFO] 10.244.0.21:34182 - 5907 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000136384s
	[INFO] 10.244.0.21:39100 - 25573 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000191087s
	[INFO] 10.244.0.21:35248 - 15611 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003408549s
	[INFO] 10.244.0.21:37573 - 11897 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003447009s
	[INFO] 10.244.0.21:41002 - 50007 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005507369s
	[INFO] 10.244.0.21:58230 - 22650 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00668097s
	[INFO] 10.244.0.21:60142 - 15436 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004508968s
	[INFO] 10.244.0.21:46524 - 36626 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007043292s
	[INFO] 10.244.0.21:48679 - 11884 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004402343s
	[INFO] 10.244.0.21:58088 - 62121 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005312713s
	[INFO] 10.244.0.21:33011 - 23461 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000873015s
	[INFO] 10.244.0.21:52797 - 42213 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001245242s
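The NXDOMAIN bursts above are expected behavior, not failures: with the resolver default of ndots:5, a name like storage.googleapis.com is first expanded through every entry in the pod's DNS search path (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the host's GCE suffixes) before the absolute name finally resolves with NOERROR. Workloads that query many external names sometimes lower ndots per pod; a sketch of that knob via the pod spec (an illustrative setting, not something this test configures):

	package dnsopt

	import corev1 "k8s.io/api/core/v1"

	// withLowNdots returns a DNS config that stops search-path expansion
	// for any name containing a dot, trimming NXDOMAIN round-trips like
	// the ones logged above.
	func withLowNdots() *corev1.PodDNSConfig {
		ndots := "1"
		return &corev1.PodDNSConfig{
			Options: []corev1.PodDNSConfigOption{{Name: "ndots", Value: &ndots}},
		}
	}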
	
	
	==> describe nodes <==
	Name:               addons-053741
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-053741
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=addons-053741
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T11_56_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-053741
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-053741"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 11:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-053741
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 11:58:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 11:58:52 +0000   Mon, 20 Oct 2025 11:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 11:58:52 +0000   Mon, 20 Oct 2025 11:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 11:58:52 +0000   Mon, 20 Oct 2025 11:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 11:58:52 +0000   Mon, 20 Oct 2025 11:57:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-053741
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                14a15a42-128d-4aa1-9f59-56e441c974e3
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-86bd5cbb97-xcpnk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  gadget                      gadget-bb9nf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  gcp-auth                    gcp-auth-78565c9fb4-6zzdw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-wwnpt    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         117s
	  kube-system                 amd-gpu-device-plugin-pcd5k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 coredns-66bc5c9577-ml6gb                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     118s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpathplugin-2k9f8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 etcd-addons-053741                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-5mww7                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      119s
	  kube-system                 kube-apiserver-addons-053741                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-addons-053741        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-f9l25                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-addons-053741                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 metrics-server-85b7d694d7-5b2cn              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         117s
	  kube-system                 nvidia-device-plugin-daemonset-p47g8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 registry-6b586f9694-gb2mv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 registry-creds-764b6fb674-6kcjl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 registry-proxy-wfdh9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 snapshot-controller-7d9fbc56b8-2ztzp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 snapshot-controller-7d9fbc56b8-stswk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  local-path-storage          local-path-provisioner-648f6765c9-ndz4w      0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-npcnf               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 117s  kube-proxy       
	  Normal  Starting                 2m4s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s  kubelet          Node addons-053741 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s  kubelet          Node addons-053741 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s  kubelet          Node addons-053741 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m    node-controller  Node addons-053741 event: Registered Node addons-053741 in Controller
	  Normal  NodeReady                78s   kubelet          Node addons-053741 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct20 11:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001634] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400706] i8042: Warning: Keylock active
	[  +0.010170] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004240] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000658] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000748] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000685] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000683] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000665] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000731] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000814] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.504819] block sda: the capability attribute has been deprecated.
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5] <==
	{"level":"warn","ts":"2025-10-20T11:56:47.278460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.285604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.291650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.299657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.306808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.313012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.320643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.341007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.348032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.354567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:47.400814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:58.761849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:56:58.768884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:57:24.838348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:57:24.844618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:57:24.862175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:57:24.868483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T11:57:45.573103Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.040079ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040758640199966 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" mod_revision:964 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" value_size:2180 >> failure:<request_range:<key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-20T11:57:45.573229Z","caller":"traceutil/trace.go:172","msg":"trace[635650607] transaction","detail":"{read_only:false; response_revision:966; number_of_response:1; }","duration":"218.297006ms","start":"2025-10-20T11:57:45.354914Z","end":"2025-10-20T11:57:45.573211Z","steps":["trace[635650607] 'process raft request'  (duration: 93.598699ms)","trace[635650607] 'compare'  (duration: 123.95377ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T11:57:55.468836Z","caller":"traceutil/trace.go:172","msg":"trace[930065237] transaction","detail":"{read_only:false; response_revision:1067; number_of_response:1; }","duration":"123.362794ms","start":"2025-10-20T11:57:55.345451Z","end":"2025-10-20T11:57:55.468814Z","steps":["trace[930065237] 'process raft request'  (duration: 123.223403ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T11:57:55.662219Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.2383ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T11:57:55.662291Z","caller":"traceutil/trace.go:172","msg":"trace[1496454679] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1067; }","duration":"121.320219ms","start":"2025-10-20T11:57:55.540955Z","end":"2025-10-20T11:57:55.662275Z","steps":["trace[1496454679] 'agreement among raft nodes before linearized reading'  (duration: 40.055541ms)","trace[1496454679] 'range keys from in-memory index tree'  (duration: 81.162988ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T11:57:55.662405Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.239247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T11:57:55.662469Z","caller":"traceutil/trace.go:172","msg":"trace[2060499466] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1068; }","duration":"119.306393ms","start":"2025-10-20T11:57:55.543150Z","end":"2025-10-20T11:57:55.662456Z","steps":["trace[2060499466] 'agreement among raft nodes before linearized reading'  (duration: 119.21031ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T11:57:55.662492Z","caller":"traceutil/trace.go:172","msg":"trace[1136934314] transaction","detail":"{read_only:false; response_revision:1068; number_of_response:1; }","duration":"185.85343ms","start":"2025-10-20T11:57:55.476626Z","end":"2025-10-20T11:57:55.662480Z","steps":["trace[1136934314] 'process raft request'  (duration: 104.409865ms)","trace[1136934314] 'compare'  (duration: 81.136314ms)"],"step_count":2}
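
The "apply request took too long" entries are etcd's built-in slow-request tracing: any apply or read that overruns the 100ms expected-duration budget gets logged with a per-step trace, as above. The 120-220ms outliers here coincide with the addon rollout and read as transient load rather than a fault; a simple way to keep watching for them, using the etcd pod name from the node description:

    kubectl --context addons-053741 -n kube-system logs etcd-addons-053741 | grep 'took too long'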
	
	
	==> gcp-auth [7a2686ee3a16603af0133e7b3765feeb36a76327a0c538482061c10fd4656b6b] <==
	2025/10/20 11:58:10 GCP Auth Webhook started!
	2025/10/20 11:58:43 Ready to marshal response ...
	2025/10/20 11:58:43 Ready to write response ...
	2025/10/20 11:58:43 Ready to marshal response ...
	2025/10/20 11:58:43 Ready to write response ...
	2025/10/20 11:58:43 Ready to marshal response ...
	2025/10/20 11:58:43 Ready to write response ...
	
	
	==> kernel <==
	 11:58:54 up 41 min,  0 user,  load average: 2.43, 1.45, 0.58
	Linux addons-053741 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238] <==
	I1020 11:56:56.726881       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 11:56:56.727643       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 11:57:26.627272       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 11:57:26.727918       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 11:57:26.730293       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1020 11:57:26.737766       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1020 11:57:27.927585       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 11:57:27.927615       1 metrics.go:72] Registering metrics
	I1020 11:57:27.927710       1 controller.go:711] "Syncing nftables rules"
	I1020 11:57:36.584925       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 11:57:36.584984       1 main.go:301] handling current node
	I1020 11:57:46.583080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 11:57:46.583143       1 main.go:301] handling current node
	I1020 11:57:56.582631       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 11:57:56.582664       1 main.go:301] handling current node
	I1020 11:58:06.583011       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 11:58:06.583056       1 main.go:301] handling current node
	I1020 11:58:16.583080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 11:58:16.583126       1 main.go:301] handling current node
	I1020 11:58:26.582969       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 11:58:26.583007       1 main.go:301] handling current node
	I1020 11:58:36.583904       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 11:58:36.583936       1 main.go:301] handling current node
	I1020 11:58:46.582215       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 11:58:46.582244       1 main.go:301] handling current node
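
The kindnet list timeouts against 10.96.0.1:443 span only the window before the apiserver's service VIP became reachable; caches synced one second later and the ten-second node-handling loop has run cleanly since. If it recurred, the same output is retrievable with the pod name from the node description:

    kubectl --context addons-053741 -n kube-system logs kindnet-5mww7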
	
	
	==> kube-apiserver [fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef] <==
	I1020 11:56:58.352950       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1020 11:56:58.761810       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1020 11:56:58.768846       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1020 11:57:04.477758       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.210.5"}
	W1020 11:57:24.838219       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1020 11:57:24.844646       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1020 11:57:24.862062       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1020 11:57:24.868430       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1020 11:57:36.932036       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.210.5:443: connect: connection refused
	E1020 11:57:36.932080       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.210.5:443: connect: connection refused" logger="UnhandledError"
	W1020 11:57:36.932197       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.210.5:443: connect: connection refused
	E1020 11:57:36.932229       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.210.5:443: connect: connection refused" logger="UnhandledError"
	W1020 11:57:36.955319       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.210.5:443: connect: connection refused
	E1020 11:57:36.955421       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.210.5:443: connect: connection refused" logger="UnhandledError"
	W1020 11:57:36.957330       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.210.5:443: connect: connection refused
	E1020 11:57:36.957368       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.210.5:443: connect: connection refused" logger="UnhandledError"
	E1020 11:57:45.341343       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.109.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.109.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.109.199:443: connect: connection refused" logger="UnhandledError"
	W1020 11:57:45.341430       1 handler_proxy.go:99] no RequestInfo found in the context
	E1020 11:57:45.341484       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1020 11:57:45.354466       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1020 11:58:52.556674       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36796: use of closed network connection
	E1020 11:58:52.705512       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36826: use of closed network connection
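
The repeated "Failed calling webhook, failing open gcp-auth-mutate.k8s.io" pairs at 11:57:36 are the apiserver reaching the gcp-auth mutating webhook before its backing service at 10.108.210.5:443 had a ready endpoint; because the webhook fails open (failurePolicy: Ignore), pods were admitted unmutated instead of being rejected. A sketch for checking the policy without assuming the configuration's name:

    kubectl --context addons-053741 get mutatingwebhookconfigurations -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.webhooks[*].failurePolicy}{"\n"}{end}'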
	
	
	==> kube-controller-manager [3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b] <==
	I1020 11:56:54.824144       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 11:56:54.824217       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 11:56:54.824240       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 11:56:54.824300       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 11:56:54.824346       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 11:56:54.825381       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 11:56:54.825432       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 11:56:54.825519       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 11:56:54.825532       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 11:56:54.828102       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 11:56:54.828156       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 11:56:54.828250       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 11:56:54.828267       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 11:56:54.844852       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 11:56:54.844861       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 11:56:54.844889       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 11:56:54.844899       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1020 11:57:24.832728       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1020 11:57:24.832879       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1020 11:57:24.832912       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1020 11:57:24.853733       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1020 11:57:24.856924       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1020 11:57:24.933517       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 11:57:24.957957       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 11:57:39.780928       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
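
The resource-quota and garbage-collector complaints about "stale GroupVersion discovery: metrics.k8s.io/v1beta1" share a root cause with the apiserver's metrics errors above: the metrics-server APIService was registered before its backend answered. Whether it has since recovered shows up directly on the APIService object:

    kubectl --context addons-053741 get apiservice v1beta1.metrics.k8s.io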
	
	
	==> kube-proxy [daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4] <==
	I1020 11:56:56.083297       1 server_linux.go:53] "Using iptables proxy"
	I1020 11:56:56.139284       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 11:56:56.242788       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 11:56:56.242833       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1020 11:56:56.242950       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 11:56:56.315892       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 11:56:56.315949       1 server_linux.go:132] "Using iptables Proxier"
	I1020 11:56:56.342287       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 11:56:56.348572       1 server.go:527] "Version info" version="v1.34.1"
	I1020 11:56:56.348670       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 11:56:56.350194       1 config.go:200] "Starting service config controller"
	I1020 11:56:56.350271       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 11:56:56.350376       1 config.go:309] "Starting node config controller"
	I1020 11:56:56.351116       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 11:56:56.351183       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 11:56:56.350972       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 11:56:56.351237       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 11:56:56.350960       1 config.go:106] "Starting endpoint slice config controller"
	I1020 11:56:56.351296       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 11:56:56.456284       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 11:56:56.461683       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 11:56:56.462323       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b] <==
	E1020 11:56:47.969436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 11:56:47.969500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 11:56:47.969528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 11:56:47.969571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 11:56:47.969578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 11:56:47.969595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 11:56:47.969635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 11:56:47.969694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 11:56:47.969703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 11:56:47.969709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 11:56:47.969747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 11:56:47.970438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 11:56:47.970482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 11:56:47.970517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 11:56:47.970662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 11:56:47.970953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 11:56:47.971018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 11:56:48.777445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 11:56:48.789508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 11:56:48.923837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 11:56:48.969717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 11:56:48.992751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 11:56:49.045716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 11:56:49.193051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1020 11:56:51.767983       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
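
The scheduler's "Failed to watch ... is forbidden" storm covers 11:56:47-49 only; the usual cause on a freshly started control plane is the scheduler racing the apiserver's bootstrap RBAC reconciliation, and the errors stop once the system:kube-scheduler binding lands (the informer caches sync at 11:56:51). The binding can be inspected after the fact:

    kubectl --context addons-053741 get clusterrolebinding system:kube-scheduler -o wide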
	
	
	==> kubelet <==
	Oct 20 11:58:04 addons-053741 kubelet[1310]: I1020 11:58:04.341961    1310 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc339a9258caa93780f19b6bcffed7b29a44cf7c17c859e55701bfe02437fc42"
	Oct 20 11:58:04 addons-053741 kubelet[1310]: I1020 11:58:04.345098    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wfdh9" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 11:58:04 addons-053741 kubelet[1310]: I1020 11:58:04.355012    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpath-resizer-0" podStartSLOduration=40.501768799 podStartE2EDuration="1m7.354994361s" podCreationTimestamp="2025-10-20 11:56:57 +0000 UTC" firstStartedPulling="2025-10-20 11:57:37.398685135 +0000 UTC m=+47.406664158" lastFinishedPulling="2025-10-20 11:58:04.251910697 +0000 UTC m=+74.259889720" observedRunningTime="2025-10-20 11:58:04.354237466 +0000 UTC m=+74.362216494" watchObservedRunningTime="2025-10-20 11:58:04.354994361 +0000 UTC m=+74.362973388"
	Oct 20 11:58:04 addons-053741 kubelet[1310]: I1020 11:58:04.364327    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-wfdh9" podStartSLOduration=2.362230985 podStartE2EDuration="28.36430653s" podCreationTimestamp="2025-10-20 11:57:36 +0000 UTC" firstStartedPulling="2025-10-20 11:57:37.397903426 +0000 UTC m=+47.405882449" lastFinishedPulling="2025-10-20 11:58:03.399978974 +0000 UTC m=+73.407957994" observedRunningTime="2025-10-20 11:58:04.363892706 +0000 UTC m=+74.371871734" watchObservedRunningTime="2025-10-20 11:58:04.36430653 +0000 UTC m=+74.372285537"
	Oct 20 11:58:05 addons-053741 kubelet[1310]: I1020 11:58:05.351836    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-pcd5k" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 11:58:05 addons-053741 kubelet[1310]: I1020 11:58:05.352556    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wfdh9" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 11:58:05 addons-053741 kubelet[1310]: I1020 11:58:05.369662    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-pcd5k" podStartSLOduration=1.869559017 podStartE2EDuration="29.36963982s" podCreationTimestamp="2025-10-20 11:57:36 +0000 UTC" firstStartedPulling="2025-10-20 11:57:37.399992046 +0000 UTC m=+47.407971054" lastFinishedPulling="2025-10-20 11:58:04.90007283 +0000 UTC m=+74.908051857" observedRunningTime="2025-10-20 11:58:05.368223334 +0000 UTC m=+75.376202363" watchObservedRunningTime="2025-10-20 11:58:05.36963982 +0000 UTC m=+75.377618848"
	Oct 20 11:58:06 addons-053741 kubelet[1310]: I1020 11:58:06.355139    1310 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-pcd5k" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 11:58:08 addons-053741 kubelet[1310]: E1020 11:58:08.781617    1310 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 20 11:58:08 addons-053741 kubelet[1310]: E1020 11:58:08.781699    1310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3-gcr-creds podName:9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3 nodeName:}" failed. No retries permitted until 2025-10-20 11:58:40.781685096 +0000 UTC m=+110.789664107 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3-gcr-creds") pod "registry-creds-764b6fb674-6kcjl" (UID: "9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3") : secret "registry-creds-gcr" not found
	Oct 20 11:58:09 addons-053741 kubelet[1310]: I1020 11:58:09.375626    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-wwnpt" podStartSLOduration=56.685921482 podStartE2EDuration="1m12.375608144s" podCreationTimestamp="2025-10-20 11:56:57 +0000 UTC" firstStartedPulling="2025-10-20 11:57:52.910729771 +0000 UTC m=+62.918708779" lastFinishedPulling="2025-10-20 11:58:08.600416432 +0000 UTC m=+78.608395441" observedRunningTime="2025-10-20 11:58:09.37539802 +0000 UTC m=+79.383377048" watchObservedRunningTime="2025-10-20 11:58:09.375608144 +0000 UTC m=+79.383587174"
	Oct 20 11:58:11 addons-053741 kubelet[1310]: I1020 11:58:11.397033    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-6zzdw" podStartSLOduration=49.812110311 podStartE2EDuration="1m7.397013586s" podCreationTimestamp="2025-10-20 11:57:04 +0000 UTC" firstStartedPulling="2025-10-20 11:57:52.912686026 +0000 UTC m=+62.920665039" lastFinishedPulling="2025-10-20 11:58:10.497589291 +0000 UTC m=+80.505568314" observedRunningTime="2025-10-20 11:58:11.396752598 +0000 UTC m=+81.404731627" watchObservedRunningTime="2025-10-20 11:58:11.397013586 +0000 UTC m=+81.404992615"
	Oct 20 11:58:14 addons-053741 kubelet[1310]: I1020 11:58:14.399964    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-bb9nf" podStartSLOduration=65.543238313 podStartE2EDuration="1m17.399943127s" podCreationTimestamp="2025-10-20 11:56:57 +0000 UTC" firstStartedPulling="2025-10-20 11:58:01.797734778 +0000 UTC m=+71.805713786" lastFinishedPulling="2025-10-20 11:58:13.65443959 +0000 UTC m=+83.662418600" observedRunningTime="2025-10-20 11:58:14.399072334 +0000 UTC m=+84.407051364" watchObservedRunningTime="2025-10-20 11:58:14.399943127 +0000 UTC m=+84.407922156"
	Oct 20 11:58:15 addons-053741 kubelet[1310]: I1020 11:58:15.127075    1310 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 20 11:58:15 addons-053741 kubelet[1310]: I1020 11:58:15.127115    1310 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 20 11:58:17 addons-053741 kubelet[1310]: I1020 11:58:17.452556    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-2k9f8" podStartSLOduration=2.109570361 podStartE2EDuration="41.452535603s" podCreationTimestamp="2025-10-20 11:57:36 +0000 UTC" firstStartedPulling="2025-10-20 11:57:37.394689235 +0000 UTC m=+47.402668241" lastFinishedPulling="2025-10-20 11:58:16.737654463 +0000 UTC m=+86.745633483" observedRunningTime="2025-10-20 11:58:17.451282727 +0000 UTC m=+87.459261753" watchObservedRunningTime="2025-10-20 11:58:17.452535603 +0000 UTC m=+87.460514630"
	Oct 20 11:58:26 addons-053741 kubelet[1310]: I1020 11:58:26.074925    1310 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be0f5294-2222-4f10-a404-1e398b22ce4a" path="/var/lib/kubelet/pods/be0f5294-2222-4f10-a404-1e398b22ce4a/volumes"
	Oct 20 11:58:34 addons-053741 kubelet[1310]: I1020 11:58:34.073926    1310 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b0a1e2f-65fd-4c50-bb79-c174a3e8a626" path="/var/lib/kubelet/pods/7b0a1e2f-65fd-4c50-bb79-c174a3e8a626/volumes"
	Oct 20 11:58:40 addons-053741 kubelet[1310]: E1020 11:58:40.819949    1310 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 20 11:58:40 addons-053741 kubelet[1310]: E1020 11:58:40.820061    1310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3-gcr-creds podName:9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3 nodeName:}" failed. No retries permitted until 2025-10-20 11:59:44.820039784 +0000 UTC m=+174.828018803 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3-gcr-creds") pod "registry-creds-764b6fb674-6kcjl" (UID: "9d8f1b02-88e6-4cc4-bfa1-d5b3fb4019d3") : secret "registry-creds-gcr" not found
	Oct 20 11:58:43 addons-053741 kubelet[1310]: I1020 11:58:43.540558    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4d37d1ce-93e9-4ffd-ae7d-4730ac0bf5cf-gcp-creds\") pod \"busybox\" (UID: \"4d37d1ce-93e9-4ffd-ae7d-4730ac0bf5cf\") " pod="default/busybox"
	Oct 20 11:58:43 addons-053741 kubelet[1310]: I1020 11:58:43.540612    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzr7k\" (UniqueName: \"kubernetes.io/projected/4d37d1ce-93e9-4ffd-ae7d-4730ac0bf5cf-kube-api-access-mzr7k\") pod \"busybox\" (UID: \"4d37d1ce-93e9-4ffd-ae7d-4730ac0bf5cf\") " pod="default/busybox"
	Oct 20 11:58:45 addons-053741 kubelet[1310]: I1020 11:58:45.524913    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.239045094 podStartE2EDuration="2.524894919s" podCreationTimestamp="2025-10-20 11:58:43 +0000 UTC" firstStartedPulling="2025-10-20 11:58:43.739168775 +0000 UTC m=+113.747147785" lastFinishedPulling="2025-10-20 11:58:45.025018599 +0000 UTC m=+115.032997610" observedRunningTime="2025-10-20 11:58:45.524615196 +0000 UTC m=+115.532594249" watchObservedRunningTime="2025-10-20 11:58:45.524894919 +0000 UTC m=+115.532873947"
	Oct 20 11:58:50 addons-053741 kubelet[1310]: I1020 11:58:50.066239    1310 scope.go:117] "RemoveContainer" containerID="edf1b3b2ae4b07d3bd7c10de23340b8ddc085ebbb1b5790c915c3dc7d5feac3c"
	Oct 20 11:58:50 addons-053741 kubelet[1310]: I1020 11:58:50.074534    1310 scope.go:117] "RemoveContainer" containerID="60b88b961937f34610c696a2fa9d9bf5c0fae512df4e85cb0bc4e253bbca7e52"
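
Both MountVolume.SetUp failures trace to the missing kube-system/registry-creds-gcr secret, which is also why registry-creds-764b6fb674-6kcjl appears among the non-running pods below; the kubelet keeps backing off (32s, then 1m4s) until the secret exists. The registry-creds addon normally creates it once credentials are supplied interactively, e.g.:

    out/minikube-linux-amd64 -p addons-053741 addons configure registry-creds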
	
	
	==> storage-provisioner [b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04] <==
	W1020 11:58:29.885888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:31.888803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:31.892736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:33.895857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:33.899433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:35.901949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:35.906976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:37.909561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:37.913225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:39.916238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:39.921480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:41.924265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:41.927869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:43.931344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:43.935807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:45.938951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:45.943473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:47.946693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:47.950672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:49.953219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:49.958538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:51.961307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:51.965200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:53.968752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 11:58:53.972564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
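
The storage-provisioner warnings repeat every two seconds because it still reads v1 Endpoints objects (deprecated in v1.33+), most likely for its leader-election lock; this is noise, not a failure. The replacement resource the warning points at can be listed directly:

    kubectl --context addons-053741 -n kube-system get endpointslices.discovery.k8s.io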
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-053741 -n addons-053741
helpers_test.go:269: (dbg) Run:  kubectl --context addons-053741 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-jbfb9 ingress-nginx-admission-patch-4krq9 registry-creds-764b6fb674-6kcjl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-053741 describe pod ingress-nginx-admission-create-jbfb9 ingress-nginx-admission-patch-4krq9 registry-creds-764b6fb674-6kcjl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-053741 describe pod ingress-nginx-admission-create-jbfb9 ingress-nginx-admission-patch-4krq9 registry-creds-764b6fb674-6kcjl: exit status 1 (60.896352ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jbfb9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4krq9" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-6kcjl" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-053741 describe pod ingress-nginx-admission-create-jbfb9 ingress-nginx-admission-patch-4krq9 registry-creds-764b6fb674-6kcjl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable headlamp --alsologtostderr -v=1: exit status 11 (244.968113ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 11:58:55.240019   25310 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:58:55.240305   25310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:58:55.240316   25310 out.go:374] Setting ErrFile to fd 2...
	I1020 11:58:55.240321   25310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:58:55.240515   25310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:58:55.240764   25310 mustload.go:65] Loading cluster: addons-053741
	I1020 11:58:55.241137   25310 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:58:55.241154   25310 addons.go:606] checking whether the cluster is paused
	I1020 11:58:55.241235   25310 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:58:55.241248   25310 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:58:55.241597   25310 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:58:55.260249   25310 ssh_runner.go:195] Run: systemctl --version
	I1020 11:58:55.260301   25310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:58:55.280490   25310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:58:55.381532   25310 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:58:55.381632   25310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:58:55.414040   25310 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:58:55.414069   25310 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:58:55.414075   25310 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:58:55.414079   25310 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:58:55.414082   25310 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:58:55.414086   25310 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:58:55.414089   25310 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:58:55.414092   25310 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:58:55.414094   25310 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:58:55.414103   25310 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:58:55.414105   25310 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:58:55.414108   25310 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:58:55.414110   25310 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:58:55.414112   25310 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:58:55.414115   25310 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:58:55.414126   25310 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:58:55.414133   25310 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:58:55.414136   25310 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:58:55.414139   25310 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:58:55.414141   25310 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:58:55.414144   25310 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:58:55.414146   25310 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:58:55.414148   25310 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:58:55.414150   25310 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:58:55.414153   25310 cri.go:89] found id: ""
	I1020 11:58:55.414198   25310 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:58:55.430918   25310 out.go:203] 
	W1020 11:58:55.432277   25310 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:58:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:58:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:58:55.432296   25310 out.go:285] * 
	* 
	W1020 11:58:55.435227   25310 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:58:55.436761   25310 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.50s)
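
All of the "addons disable" failures in this report share one signature: before disabling an addon, minikube verifies the cluster is not paused by running sudo runc list -f json on the node, and on this crio configuration that check exits 1 because /run/runc does not exist (the state directory belongs to whichever OCI runtime crio is actually using, e.g. crun). A minimal reproduction sketch, assuming the addons-053741 profile from this run:

# Re-run the exact check minikube performs; on this node it exits 1:
minikube ssh -p addons-053741 -- sudo runc list -f json

# The same kube-system containers are visible through crio itself; this is
# the crictl query that succeeded in the log above:
minikube ssh -p addons-053741 -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system

# See which runtime state directory the node actually has under /run:
minikube ssh -p addons-053741 -- ls /run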

TestAddons/parallel/CloudSpanner (5.24s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-xcpnk" [cc9e62d3-fa90-4660-aa55-aa789be56991] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003666307s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (232.900816ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 11:59:12.094030   27673 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:59:12.094302   27673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:12.094313   27673 out.go:374] Setting ErrFile to fd 2...
	I1020 11:59:12.094317   27673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:12.094520   27673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:59:12.094761   27673 mustload.go:65] Loading cluster: addons-053741
	I1020 11:59:12.095139   27673 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:12.095156   27673 addons.go:606] checking whether the cluster is paused
	I1020 11:59:12.095246   27673 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:12.095257   27673 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:59:12.095626   27673 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:59:12.113486   27673 ssh_runner.go:195] Run: systemctl --version
	I1020 11:59:12.113554   27673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:59:12.132568   27673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:59:12.231561   27673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:59:12.231643   27673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:59:12.261204   27673 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:59:12.261227   27673 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:59:12.261232   27673 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:59:12.261236   27673 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:59:12.261239   27673 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:59:12.261244   27673 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:59:12.261256   27673 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:59:12.261260   27673 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:59:12.261265   27673 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:59:12.261272   27673 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:59:12.261276   27673 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:59:12.261280   27673 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:59:12.261284   27673 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:59:12.261289   27673 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:59:12.261293   27673 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:59:12.261300   27673 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:59:12.261308   27673 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:59:12.261313   27673 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:59:12.261316   27673 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:59:12.261320   27673 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:59:12.261324   27673 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:59:12.261327   27673 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:59:12.261338   27673 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:59:12.261341   27673 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:59:12.261345   27673 cri.go:89] found id: ""
	I1020 11:59:12.261390   27673 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:59:12.275360   27673 out.go:203] 
	W1020 11:59:12.276903   27673 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:59:12.276923   27673 out.go:285] * 
	* 
	W1020 11:59:12.279965   27673 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:59:12.281581   27673 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)

TestAddons/parallel/LocalPath (7.28s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-053741 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-053741 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc test-pvc -o jsonpath={.status.phase} -n default
2025/10/20 11:59:06 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053741 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [efb77bc5-bb64-4f02-ad1e-3921cc2c8156] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [efb77bc5-bb64-4f02-ad1e-3921cc2c8156] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [efb77bc5-bb64-4f02-ad1e-3921cc2c8156] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 2.003621088s
addons_test.go:967: (dbg) Run:  kubectl --context addons-053741 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 ssh "cat /opt/local-path-provisioner/pvc-cd57e4c8-185c-4547-ba57-5ad9deb884da_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-053741 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-053741 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (291.770814ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 11:59:10.489287   27342 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:59:10.489605   27342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:10.489618   27342 out.go:374] Setting ErrFile to fd 2...
	I1020 11:59:10.489626   27342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:10.489927   27342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:59:10.490271   27342 mustload.go:65] Loading cluster: addons-053741
	I1020 11:59:10.490602   27342 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:10.490618   27342 addons.go:606] checking whether the cluster is paused
	I1020 11:59:10.490695   27342 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:10.490704   27342 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:59:10.491208   27342 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:59:10.513512   27342 ssh_runner.go:195] Run: systemctl --version
	I1020 11:59:10.513559   27342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:59:10.538317   27342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:59:10.655188   27342 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:59:10.655300   27342 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:59:10.694861   27342 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:59:10.694881   27342 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:59:10.694884   27342 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:59:10.694887   27342 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:59:10.694891   27342 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:59:10.694895   27342 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:59:10.694899   27342 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:59:10.694902   27342 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:59:10.694906   27342 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:59:10.694919   27342 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:59:10.694923   27342 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:59:10.694926   27342 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:59:10.694930   27342 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:59:10.694934   27342 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:59:10.694944   27342 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:59:10.694950   27342 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:59:10.694954   27342 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:59:10.694958   27342 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:59:10.694961   27342 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:59:10.694965   27342 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:59:10.694968   27342 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:59:10.694972   27342 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:59:10.694975   27342 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:59:10.694980   27342 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:59:10.694984   27342 cri.go:89] found id: ""
	I1020 11:59:10.695026   27342 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:59:10.712734   27342 out.go:203] 
	W1020 11:59:10.714622   27342 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:59:10.714644   27342 out.go:285] * 
	* 
	W1020 11:59:10.718324   27342 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:59:10.720619   27342 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (7.28s)
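
The storage flow itself passed here (the claim was provisioned, test-local-path ran to completion, and file1 was read back over ssh); only the trailing addons disable call failed, with the same runc paused-check error as the other addon tests. For reference, a minimal equivalent of the applied manifests might look like the sketch below; the local-path storage class name, the busybox image, and the file contents are assumptions standing in for the actual testdata files:

# apply with: kubectl --context addons-053741 apply -f test-local-path.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path  # class registered by the rancher provisioner (assumed)
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path > /data/file1"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc

The repeated "get pvc" polls above are expected with this provisioner: local-path volumes bind WaitForFirstConsumer, so the claim stays Pending until the pod is scheduled.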

TestAddons/parallel/NvidiaDevicePlugin (5.24s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-p47g8" [25f655f5-d10c-4bb9-ba62-e9c4612d119b] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002978323s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (240.338232ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 11:58:57.990931   25462 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:58:57.991219   25462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:58:57.991228   25462 out.go:374] Setting ErrFile to fd 2...
	I1020 11:58:57.991233   25462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:58:57.991445   25462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:58:57.991713   25462 mustload.go:65] Loading cluster: addons-053741
	I1020 11:58:57.992099   25462 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:58:57.992117   25462 addons.go:606] checking whether the cluster is paused
	I1020 11:58:57.992200   25462 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:58:57.992212   25462 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:58:57.992607   25462 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:58:58.012511   25462 ssh_runner.go:195] Run: systemctl --version
	I1020 11:58:58.012580   25462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:58:58.030847   25462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:58:58.129701   25462 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:58:58.129785   25462 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:58:58.160846   25462 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:58:58.160882   25462 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:58:58.160887   25462 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:58:58.160892   25462 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:58:58.160897   25462 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:58:58.160902   25462 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:58:58.160905   25462 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:58:58.160909   25462 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:58:58.160913   25462 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:58:58.160924   25462 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:58:58.160928   25462 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:58:58.160932   25462 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:58:58.160936   25462 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:58:58.160939   25462 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:58:58.160943   25462 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:58:58.160952   25462 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:58:58.160956   25462 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:58:58.160968   25462 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:58:58.160972   25462 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:58:58.160976   25462 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:58:58.160981   25462 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:58:58.160985   25462 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:58:58.160988   25462 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:58:58.160996   25462 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:58:58.161000   25462 cri.go:89] found id: ""
	I1020 11:58:58.161073   25462 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:58:58.175611   25462 out.go:203] 
	W1020 11:58:58.177293   25462 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:58:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:58:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:58:58.177313   25462 out.go:285] * 
	* 
	W1020 11:58:58.180250   25462 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:58:58.181738   25462 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.24s)

TestAddons/parallel/Yakd (5.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-npcnf" [c47e2bdf-e407-4723-b649-1849b7b6ca8b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.023301071s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable yakd --alsologtostderr -v=1: exit status 11 (236.715251ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 11:59:16.266459   27889 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:59:16.266615   27889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:16.266624   27889 out.go:374] Setting ErrFile to fd 2...
	I1020 11:59:16.266627   27889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:59:16.266873   27889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:59:16.267117   27889 mustload.go:65] Loading cluster: addons-053741
	I1020 11:59:16.267487   27889 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:16.267504   27889 addons.go:606] checking whether the cluster is paused
	I1020 11:59:16.267614   27889 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:59:16.267627   27889 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:59:16.268004   27889 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:59:16.286953   27889 ssh_runner.go:195] Run: systemctl --version
	I1020 11:59:16.287004   27889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:59:16.304819   27889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:59:16.403500   27889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:59:16.403565   27889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:59:16.432792   27889 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:59:16.432815   27889 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:59:16.432820   27889 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:59:16.432825   27889 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:59:16.432828   27889 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:59:16.432832   27889 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:59:16.432837   27889 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:59:16.432840   27889 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:59:16.432844   27889 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:59:16.432852   27889 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:59:16.432856   27889 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:59:16.432860   27889 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:59:16.432874   27889 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:59:16.432882   27889 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:59:16.432887   27889 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:59:16.432894   27889 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:59:16.432901   27889 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:59:16.432905   27889 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:59:16.432909   27889 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:59:16.432912   27889 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:59:16.432920   27889 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:59:16.432927   27889 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:59:16.432931   27889 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:59:16.432937   27889 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:59:16.432941   27889 cri.go:89] found id: ""
	I1020 11:59:16.432985   27889 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:59:16.448911   27889 out.go:203] 
	W1020 11:59:16.450451   27889 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:59:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:59:16.450480   27889 out.go:285] * 
	* 
	W1020 11:59:16.453446   27889 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:59:16.454882   27889 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

TestAddons/parallel/AmdGpuDevicePlugin (5.24s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-pcd5k" [771d33eb-3b8b-487a-bd72-dca77feff4e4] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.002885447s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053741 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053741 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (238.842586ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 11:58:57.992220   25461 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:58:57.992486   25461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:58:57.992494   25461 out.go:374] Setting ErrFile to fd 2...
	I1020 11:58:57.992499   25461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:58:57.992739   25461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:58:57.992995   25461 mustload.go:65] Loading cluster: addons-053741
	I1020 11:58:57.993311   25461 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:58:57.993324   25461 addons.go:606] checking whether the cluster is paused
	I1020 11:58:57.993405   25461 config.go:182] Loaded profile config "addons-053741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:58:57.993417   25461 host.go:66] Checking if "addons-053741" exists ...
	I1020 11:58:57.993795   25461 cli_runner.go:164] Run: docker container inspect addons-053741 --format={{.State.Status}}
	I1020 11:58:58.012539   25461 ssh_runner.go:195] Run: systemctl --version
	I1020 11:58:58.012585   25461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053741
	I1020 11:58:58.031463   25461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/addons-053741/id_rsa Username:docker}
	I1020 11:58:58.130026   25461 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:58:58.130128   25461 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:58:58.160151   25461 cri.go:89] found id: "6df2005c3dce47da660fb5ba279b22da674ff838b9735ddf1ce5624b81d24a2a"
	I1020 11:58:58.160176   25461 cri.go:89] found id: "02ac8e9a477c9969fbbd471b507f06911f06832478cb2ab460a79d25130400c6"
	I1020 11:58:58.160180   25461 cri.go:89] found id: "2d3daf84e6c96abb56054995c7c1793d635a632668815bfff8f7c53f42247638"
	I1020 11:58:58.160184   25461 cri.go:89] found id: "dd4bb1b4f70461ecd9715376a5692c9497203a19b51d5d2e170df8650b700a85"
	I1020 11:58:58.160186   25461 cri.go:89] found id: "6b0cf0f679a40d99370f35cfa9f97fa0cac056c4db8ce0f0067410af4c55e067"
	I1020 11:58:58.160190   25461 cri.go:89] found id: "c4b5fa9dcee146e44195a755c610c7547af0b800331acf012c7ab0657a123956"
	I1020 11:58:58.160193   25461 cri.go:89] found id: "b5d282533aea85ed9df50cc372cc5a932efe466d2241ddd09cc640c8c236a188"
	I1020 11:58:58.160195   25461 cri.go:89] found id: "c6bc622719c6acc17c6ecbd44690ffe6b660c5162f5796c95bd4ee0d3421e95a"
	I1020 11:58:58.160198   25461 cri.go:89] found id: "28a9df06a407b5799fa060a0af2692c4be364e29b164bf656f680ca3568ddfd2"
	I1020 11:58:58.160207   25461 cri.go:89] found id: "8ee09292e70de6a7aa65c1daced3aede2442f82ea7c49c53eb397c9edf808e80"
	I1020 11:58:58.160210   25461 cri.go:89] found id: "cb34c9f1c580c07602b3c3500fb0e4a6761cdf3f16d1deed9d4445d14a5556e1"
	I1020 11:58:58.160213   25461 cri.go:89] found id: "51a80cd6bc076c788b8ed4533a498b2d4ca656f0f79dde9dc6bd2eff97705a37"
	I1020 11:58:58.160215   25461 cri.go:89] found id: "d9a30b9299a6ea229aa55765514dbd0ce9704add6dfecf803b3cbf525bd65431"
	I1020 11:58:58.160218   25461 cri.go:89] found id: "9370bc1dd29d31a20c0e0ee212ac3605addc07724b500868ce1f29bd4ae44600"
	I1020 11:58:58.160220   25461 cri.go:89] found id: "fa80ac0b9cd9c9bcc20f0e7ff0b3663829fd96c557d051a0b38742182f2b9b76"
	I1020 11:58:58.160224   25461 cri.go:89] found id: "67371a5015804f311f35cf6b502dd9e774a4c9a732d3466040928f15493643b3"
	I1020 11:58:58.160226   25461 cri.go:89] found id: "0f15b4706c7716ea6931d4447cb382cd0d551cec7adb1b9dc52960000534e31a"
	I1020 11:58:58.160230   25461 cri.go:89] found id: "b5c7f9c4b30eb573a95917509855474a977bbf0327f8143c2639b12c87f28e04"
	I1020 11:58:58.160233   25461 cri.go:89] found id: "52948a7351d922dd08f31ab839ef857c7d3aab1470afb761184a14fcc7d63238"
	I1020 11:58:58.160235   25461 cri.go:89] found id: "daef0b8bb4e243fd440f59e4fb3ba29d66995159cd227377d2d03ff7118880f4"
	I1020 11:58:58.160238   25461 cri.go:89] found id: "3638400d972a383e443ba5687d677dad476b05611a441c5a8b8683711aafa26b"
	I1020 11:58:58.160240   25461 cri.go:89] found id: "fac7a84a8cd033d3cd200605fae1e2521eb4b24b00082988402e46e8728cc6ef"
	I1020 11:58:58.160243   25461 cri.go:89] found id: "a165b7f5e69ecdb50981fdd8b1b712843fe1a82d34cb7d59937a646fcc40ce5b"
	I1020 11:58:58.160245   25461 cri.go:89] found id: "d6564015bbe91daf0b1b559316e3f63b2a1f150a3ce2de40e6d33a254f3509e5"
	I1020 11:58:58.160258   25461 cri.go:89] found id: ""
	I1020 11:58:58.160299   25461 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 11:58:58.174608   25461 out.go:203] 
	W1020 11:58:58.176479   25461 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:58:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T11:58:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 11:58:58.176497   25461 out.go:285] * 
	* 
	W1020 11:58:58.179503   25461 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 11:58:58.180976   25461 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-053741 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.24s)

TestFunctional/parallel/ServiceCmdConnect (602.97s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-012564 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-012564 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-cfmmv" [b1fa8314-b0b6-4ce3-8e80-f58791673348] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012564 -n functional-012564
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-20 12:14:30.991203919 +0000 UTC m=+1113.951639042
functional_test.go:1645: (dbg) Run:  kubectl --context functional-012564 describe po hello-node-connect-7d85dfc575-cfmmv -n default
functional_test.go:1645: (dbg) kubectl --context functional-012564 describe po hello-node-connect-7d85dfc575-cfmmv -n default:
Name:             hello-node-connect-7d85dfc575-cfmmv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-012564/192.168.49.2
Start Time:       Mon, 20 Oct 2025 12:04:30 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-whr5x (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-whr5x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cfmmv to functional-012564
Normal   Pulling    6m59s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m59s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m59s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
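The kubelet events above pinpoint the root cause: CRI-O on this node enforces short-name resolution, so the unqualified reference kicbase/echo-server is rejected as ambiguous instead of being pulled. Two possible workarounds, sketched below; both assume the image is published on Docker Hub as docker.io/kicbase/echo-server, which the log does not confirm:

	# Hypothetical fix 1: deploy with a fully-qualified reference so CRI-O
	# never needs to resolve a short name.
	kubectl --context functional-012564 create deployment hello-node-connect \
	  --image=docker.io/kicbase/echo-server:latest

	# Hypothetical fix 2: add a short-name alias on the node, e.g. in
	# /etc/containers/registries.conf.d/50-echo-server.conf; aliases are
	# honored even when short-name-mode = "enforcing":
	#
	#   [aliases]
	#   "kicbase/echo-server" = "docker.io/kicbase/echo-server"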
functional_test.go:1645: (dbg) Run:  kubectl --context functional-012564 logs hello-node-connect-7d85dfc575-cfmmv -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-012564 logs hello-node-connect-7d85dfc575-cfmmv -n default: exit status 1 (73.981906ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cfmmv" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1645: kubectl --context functional-012564 logs hello-node-connect-7d85dfc575-cfmmv -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
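Before reading the dumps below, the pull failure can be reproduced directly on the node. A minimal sketch, assuming crictl is present in the minikube node image (it normally ships alongside CRI-O):

	# Expected to fail with the same "short name mode is enforcing ...
	# ambiguous list" error that the kubelet events reported.
	minikube -p functional-012564 ssh -- sudo crictl pull kicbase/echo-server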
functional_test.go:1612: (dbg) Run:  kubectl --context functional-012564 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-cfmmv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-012564/192.168.49.2
Start Time:       Mon, 20 Oct 2025 12:04:30 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-whr5x (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-whr5x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cfmmv to functional-012564
Normal   Pulling    6m59s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m59s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m59s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1618: (dbg) Run:  kubectl --context functional-012564 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-012564 logs -l app=hello-node-connect: exit status 1 (66.935536ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cfmmv" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1620: "kubectl --context functional-012564 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-012564 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.27.46
IPs:                      10.100.27.46
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31245/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
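Note the empty Endpoints field above: a NodePort service only forwards to Ready pods, and the lone hello-node-connect pod never left ImagePullBackOff, so the earlier service --url calls had nothing to route to. A quick confirmation, sketched with standard kubectl (no project-specific tooling assumed):

	# ENDPOINTS stays empty until at least one backing pod reports Ready.
	kubectl --context functional-012564 -n default get endpoints hello-node-connect
	kubectl --context functional-012564 -n default get pods -l app=hello-node-connect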
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012564
helpers_test.go:243: (dbg) docker inspect functional-012564:
-- stdout --
	[
	    {
	        "Id": "64daf4419e7bc8247f99d3bc3a2c0fed2692f12a51e9f20f1a04049ca6d49513",
	        "Created": "2025-10-20T12:02:46.497936753Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 38786,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:02:46.531069547Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/64daf4419e7bc8247f99d3bc3a2c0fed2692f12a51e9f20f1a04049ca6d49513/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64daf4419e7bc8247f99d3bc3a2c0fed2692f12a51e9f20f1a04049ca6d49513/hostname",
	        "HostsPath": "/var/lib/docker/containers/64daf4419e7bc8247f99d3bc3a2c0fed2692f12a51e9f20f1a04049ca6d49513/hosts",
	        "LogPath": "/var/lib/docker/containers/64daf4419e7bc8247f99d3bc3a2c0fed2692f12a51e9f20f1a04049ca6d49513/64daf4419e7bc8247f99d3bc3a2c0fed2692f12a51e9f20f1a04049ca6d49513-json.log",
	        "Name": "/functional-012564",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012564:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012564",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "64daf4419e7bc8247f99d3bc3a2c0fed2692f12a51e9f20f1a04049ca6d49513",
	                "LowerDir": "/var/lib/docker/overlay2/57dd5b841ba51e8fd88b3339d6c935ad06b2ff651e942ccb9aa8513ad9e9909b-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/57dd5b841ba51e8fd88b3339d6c935ad06b2ff651e942ccb9aa8513ad9e9909b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/57dd5b841ba51e8fd88b3339d6c935ad06b2ff651e942ccb9aa8513ad9e9909b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/57dd5b841ba51e8fd88b3339d6c935ad06b2ff651e942ccb9aa8513ad9e9909b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012564",
	                "Source": "/var/lib/docker/volumes/functional-012564/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012564",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012564",
	                "name.minikube.sigs.k8s.io": "functional-012564",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b4b32a8aa89c228b9e6eed51885906118563f9fdf80a623819f70d561a12782",
	            "SandboxKey": "/var/run/docker/netns/9b4b32a8aa89",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012564": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:8c:b1:aa:0d:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6224367e04555f4cdcdc35eb9fc502382e64befa2b76285a9392b09e4b92a933",
	                    "EndpointID": "8d6c4bc46213900680128434284d4e78446d87a15f9a2e9c828e8fe11c0f491b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012564",
	                        "64daf4419e7b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
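The inspect output shows that minikube publishes the node's ports on the loopback interface only; for example, the API server port 8441/tcp is bound to 127.0.0.1:32781. A sketch for reading that mapping without the full JSON dump, using the plain docker CLI:

	docker port functional-012564 8441
	# 127.0.0.1:32781
	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-012564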
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012564 -n functional-012564
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-012564 logs -n 25: (1.343059916s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                   ARGS                                                    │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start          │ -p functional-012564 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio           │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:04 UTC │                     │
	│ start          │ -p functional-012564 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:04 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-012564 --alsologtostderr -v=1                                            │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:04 UTC │ 20 Oct 25 12:04 UTC │
	│ ssh            │ functional-012564 ssh sudo cat /etc/ssl/certs/14592.pem                                                   │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:04 UTC │ 20 Oct 25 12:04 UTC │
	│ ssh            │ functional-012564 ssh sudo cat /usr/share/ca-certificates/14592.pem                                       │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:04 UTC │ 20 Oct 25 12:04 UTC │
	│ ssh            │ functional-012564 ssh sudo cat /etc/ssl/certs/51391683.0                                                  │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:04 UTC │ 20 Oct 25 12:04 UTC │
	│ ssh            │ functional-012564 ssh sudo cat /etc/ssl/certs/145922.pem                                                  │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:04 UTC │ 20 Oct 25 12:04 UTC │
	│ ssh            │ functional-012564 ssh sudo cat /usr/share/ca-certificates/145922.pem                                      │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:04 UTC │ 20 Oct 25 12:04 UTC │
	│ ssh            │ functional-012564 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                  │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:04 UTC │ 20 Oct 25 12:04 UTC │
	│ ssh            │ functional-012564 ssh sudo cat /etc/test/nested/copy/14592/hosts                                          │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:04 UTC │ 20 Oct 25 12:04 UTC │
	│ image          │ functional-012564 image ls --format short --alsologtostderr                                               │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:05 UTC │ 20 Oct 25 12:05 UTC │
	│ image          │ functional-012564 image ls --format yaml --alsologtostderr                                                │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:05 UTC │ 20 Oct 25 12:05 UTC │
	│ ssh            │ functional-012564 ssh pgrep buildkitd                                                                     │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:05 UTC │                     │
	│ image          │ functional-012564 image build -t localhost/my-image:functional-012564 testdata/build --alsologtostderr    │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:05 UTC │ 20 Oct 25 12:05 UTC │
	│ image          │ functional-012564 image ls                                                                                │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:05 UTC │ 20 Oct 25 12:05 UTC │
	│ image          │ functional-012564 image ls --format json --alsologtostderr                                                │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:05 UTC │ 20 Oct 25 12:05 UTC │
	│ image          │ functional-012564 image ls --format table --alsologtostderr                                               │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:05 UTC │ 20 Oct 25 12:05 UTC │
	│ update-context │ functional-012564 update-context --alsologtostderr -v=2                                                   │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:05 UTC │ 20 Oct 25 12:05 UTC │
	│ update-context │ functional-012564 update-context --alsologtostderr -v=2                                                   │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:05 UTC │ 20 Oct 25 12:05 UTC │
	│ update-context │ functional-012564 update-context --alsologtostderr -v=2                                                   │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:05 UTC │ 20 Oct 25 12:05 UTC │
	│ service        │ functional-012564 service list                                                                            │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:14 UTC │ 20 Oct 25 12:14 UTC │
	│ service        │ functional-012564 service list -o json                                                                    │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:14 UTC │ 20 Oct 25 12:14 UTC │
	│ service        │ functional-012564 service --namespace=default --https --url hello-node                                    │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:14 UTC │                     │
	│ service        │ functional-012564 service hello-node --url --format={{.IP}}                                               │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:14 UTC │                     │
	│ service        │ functional-012564 service hello-node --url                                                                │ functional-012564 │ jenkins │ v1.37.0 │ 20 Oct 25 12:14 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:04:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:04:47.128631   52737 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:04:47.128886   52737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:04:47.128893   52737 out.go:374] Setting ErrFile to fd 2...
	I1020 12:04:47.128898   52737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:04:47.129221   52737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:04:47.129674   52737 out.go:368] Setting JSON to false
	I1020 12:04:47.130617   52737 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2836,"bootTime":1760959051,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:04:47.130710   52737 start.go:141] virtualization: kvm guest
	I1020 12:04:47.132579   52737 out.go:179] * [functional-012564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:04:47.134144   52737 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:04:47.134129   52737 notify.go:220] Checking for updates...
	I1020 12:04:47.136817   52737 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:04:47.138149   52737 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:04:47.139582   52737 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:04:47.140963   52737 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:04:47.142338   52737 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:04:47.143925   52737 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:04:47.144431   52737 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:04:47.170585   52737 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:04:47.170731   52737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:04:47.227964   52737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-20 12:04:47.218095466 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:04:47.228077   52737 docker.go:318] overlay module found
	I1020 12:04:47.229949   52737 out.go:179] * Using the docker driver based on existing profile
	I1020 12:04:47.231160   52737 start.go:305] selected driver: docker
	I1020 12:04:47.231174   52737 start.go:925] validating driver "docker" against &{Name:functional-012564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:04:47.231256   52737 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:04:47.233333   52737 out.go:203] 
	W1020 12:04:47.234835   52737 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1020 12:04:47.236175   52737 out.go:203] 
	
	
	==> CRI-O <==
	Oct 20 12:05:01 functional-012564 crio[3566]: time="2025-10-20T12:05:01.007513512Z" level=info msg="Starting container: a0a7450711b4faf0296608597da38cf62066f7ba1312710ef678f59d87e48d2f" id=41c58304-a15f-4ff3-9d6f-b540b22ab27b name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:05:01 functional-012564 crio[3566]: time="2025-10-20T12:05:01.010516534Z" level=info msg="Started container" PID=7307 containerID=a0a7450711b4faf0296608597da38cf62066f7ba1312710ef678f59d87e48d2f description=default/mysql-5bb876957f-7kthf/mysql id=41c58304-a15f-4ff3-9d6f-b540b22ab27b name=/runtime.v1.RuntimeService/StartContainer sandboxID=fcc6df2b1647f7e7d5533f8747d8450afa075cc6b27e8fe5fa87830db1441eba
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.342916426Z" level=info msg="Stopping pod sandbox: 39e74de05b9e18f4c9962b23ce04ca7407087ceb38485b95604ae6385f24aa11" id=91dab8ae-cbf4-46eb-9696-6f20b8264777 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.342996843Z" level=info msg="Stopped pod sandbox (already stopped): 39e74de05b9e18f4c9962b23ce04ca7407087ceb38485b95604ae6385f24aa11" id=91dab8ae-cbf4-46eb-9696-6f20b8264777 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.344005637Z" level=info msg="Removing pod sandbox: 39e74de05b9e18f4c9962b23ce04ca7407087ceb38485b95604ae6385f24aa11" id=e10931d1-6222-400d-afeb-f6d3287433f1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.347650129Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.347710356Z" level=info msg="Removed pod sandbox: 39e74de05b9e18f4c9962b23ce04ca7407087ceb38485b95604ae6385f24aa11" id=e10931d1-6222-400d-afeb-f6d3287433f1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.348290467Z" level=info msg="Stopping pod sandbox: e0114200ca52e248aeff130dbacfa2c1edb52a16fd6581ad4b8e95cfe4839893" id=2453269c-8012-4c3e-8bbe-1f4a0488c68d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.348343135Z" level=info msg="Stopped pod sandbox (already stopped): e0114200ca52e248aeff130dbacfa2c1edb52a16fd6581ad4b8e95cfe4839893" id=2453269c-8012-4c3e-8bbe-1f4a0488c68d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.348862757Z" level=info msg="Removing pod sandbox: e0114200ca52e248aeff130dbacfa2c1edb52a16fd6581ad4b8e95cfe4839893" id=3dbef509-d05c-454d-b7a4-bfba53960c7d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.352175888Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.352251584Z" level=info msg="Removed pod sandbox: e0114200ca52e248aeff130dbacfa2c1edb52a16fd6581ad4b8e95cfe4839893" id=3dbef509-d05c-454d-b7a4-bfba53960c7d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.353219251Z" level=info msg="Stopping pod sandbox: 6ddd4dce5c186bd857b965f4ae9f7b31b88c6b78810c07a25521514cdbb043fa" id=d3be373d-7f06-42c5-a647-5a8cf71f5328 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.353281387Z" level=info msg="Stopped pod sandbox (already stopped): 6ddd4dce5c186bd857b965f4ae9f7b31b88c6b78810c07a25521514cdbb043fa" id=d3be373d-7f06-42c5-a647-5a8cf71f5328 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.353610448Z" level=info msg="Removing pod sandbox: 6ddd4dce5c186bd857b965f4ae9f7b31b88c6b78810c07a25521514cdbb043fa" id=a6d99d3f-ebb0-4323-84a6-87d52762acb1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.356924417Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:05:02 functional-012564 crio[3566]: time="2025-10-20T12:05:02.35699576Z" level=info msg="Removed pod sandbox: 6ddd4dce5c186bd857b965f4ae9f7b31b88c6b78810c07a25521514cdbb043fa" id=a6d99d3f-ebb0-4323-84a6-87d52762acb1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:05:09 functional-012564 crio[3566]: time="2025-10-20T12:05:09.353810512Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=75f64ed1-8128-438e-b951-c06b37d21a14 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:05:11 functional-012564 crio[3566]: time="2025-10-20T12:05:11.354286708Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0f53d84d-51b3-415b-8ee5-300c413eaee1 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:05:59 functional-012564 crio[3566]: time="2025-10-20T12:05:59.353785577Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d65cb4a9-5a28-49f6-85ac-dfe8a0d56c47 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:05:59 functional-012564 crio[3566]: time="2025-10-20T12:05:59.354652756Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4cd94960-08ad-4a83-af71-edfc9ad01f37 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:07:30 functional-012564 crio[3566]: time="2025-10-20T12:07:30.35372449Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=dbec27fa-dcc8-4693-9b17-301ce49bda00 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:07:32 functional-012564 crio[3566]: time="2025-10-20T12:07:32.354839993Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5f844a8e-eab3-428f-b95c-27528606c9c9 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:10:13 functional-012564 crio[3566]: time="2025-10-20T12:10:13.354329459Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=99868bae-037b-43da-8ed8-30d132a49604 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:10:22 functional-012564 crio[3566]: time="2025-10-20T12:10:22.353817297Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7a271fb3-c7db-4dfa-bf08-0f4850440986 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a0a7450711b4f       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   fcc6df2b1647f       mysql-5bb876957f-7kthf                       default
	cc76a711652af       docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115                  9 minutes ago       Running             myfrontend                  0                   84884935454ce       sp-pod                                       default
	db3835ee8e00c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   3f5f9e2d3f023       kubernetes-dashboard-855c9754f9-hxrg4        kubernetes-dashboard
	1a38a6aefbfb1       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   a090399583669       dashboard-metrics-scraper-77bf4d6c4c-6hrfr   kubernetes-dashboard
	aadd275a9ab67       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   58f5c5db773fd       busybox-mount                                default
	7ef2b9a557b6c       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  10 minutes ago      Running             nginx                       0                   7b50d5a5790da       nginx-svc                                    default
	f8f02f64a4395       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   0ee0864b43206       kube-controller-manager-functional-012564    kube-system
	4e72a0a1bbca0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   5c0979ce83a53       kube-apiserver-functional-012564             kube-system
	51d97415c8a06       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   0ee0864b43206       kube-controller-manager-functional-012564    kube-system
	d64a1130b02ee       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   75db16231881b       etcd-functional-012564                       kube-system
	391bdfa78c7c3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   fe7807fbcf87b       kindnet-fdtsw                                kube-system
	27e3e40612bb8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   c2d59959e465d       kube-scheduler-functional-012564             kube-system
	b2b978090e25f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   bd3db92a0f2f6       storage-provisioner                          kube-system
	3881e5c98ae64       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   1773332da94ce       kube-proxy-tbjqv                             kube-system
	3b1a2c8492d8d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   002f05a9a5d12       coredns-66bc5c9577-wvvvr                     kube-system
	9386e14f8e16d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   002f05a9a5d12       coredns-66bc5c9577-wvvvr                     kube-system
	493a793ac3fd5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   bd3db92a0f2f6       storage-provisioner                          kube-system
	983936e0433e5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   fe7807fbcf87b       kindnet-fdtsw                                kube-system
	192fecddca73a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   1773332da94ce       kube-proxy-tbjqv                             kube-system
	21216461a3910       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   c2d59959e465d       kube-scheduler-functional-012564             kube-system
	bf2132d006df2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   75db16231881b       etcd-functional-012564                       kube-system
	
	
	==> coredns [3b1a2c8492d8d8f7a0854f4ae569553040b6dcd0d856b7486765bc26ee3a9d9f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44436 - 45950 "HINFO IN 4600868838193633351.7455284011599346343. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033011489s
	
	
	==> coredns [9386e14f8e16d7b9a82a6c00a6cb28c7fb18428c52df5761aa737d0a431c7654] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50194 - 44699 "HINFO IN 4440069584458905663.1643376361356568686. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02291026s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-012564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-012564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=functional-012564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_03_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:02:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-012564
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:14:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:14:26 +0000   Mon, 20 Oct 2025 12:02:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:14:26 +0000   Mon, 20 Oct 2025 12:02:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:14:26 +0000   Mon, 20 Oct 2025 12:02:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:14:26 +0000   Mon, 20 Oct 2025 12:03:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-012564
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                1adb068d-60d4-469f-a4b9-ec6d1a9cc6c0
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-rrp5g                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-cfmmv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-7kthf                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m37s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 coredns-66bc5c9577-wvvvr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-012564                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-fdtsw                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-012564              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-012564     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-tbjqv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-012564              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-6hrfr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hxrg4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-012564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-012564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-012564 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-012564 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-012564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-012564 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           11m                node-controller  Node functional-012564 event: Registered Node functional-012564 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-012564 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-012564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-012564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-012564 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-012564 event: Registered Node functional-012564 in Controller
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
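
Note: the repeated "martian source 10.244.0.20 from 127.0.0.1" entries record packets with a loopback source address arriving on eth0. This pattern commonly appears when kube-proxy sets route_localnet=1 so NodePorts answer on localhost (see the proxier message in the kube-proxy log below); it is kernel logging noise rather than a test failure. A minimal spot-check, assuming shell access to the node via the profile shown in this report:

	minikube -p functional-012564 ssh -- sysctl net.ipv4.conf.all.route_localnet
	# prints "net.ipv4.conf.all.route_localnet = 1" once kube-proxy has applied the setting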
	
	
	==> etcd [bf2132d006df279fb088d688bb651175272c48d2786648827cef08697f396007] <==
	{"level":"warn","ts":"2025-10-20T12:02:59.102502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:02:59.109044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:02:59.115284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:02:59.128272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:02:59.134587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:02:59.141857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:02:59.192853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58968","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-20T12:03:42.945928Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-20T12:03:42.946018Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-012564","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-20T12:03:42.946111Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-20T12:03:49.948058Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-20T12:03:49.949483Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-20T12:03:49.949490Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-20T12:03:49.949579Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-20T12:03:49.949705Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-20T12:03:49.949706Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-20T12:03:49.949732Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-20T12:03:49.949759Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-20T12:03:49.949792Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-20T12:03:49.949814Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-20T12:03:49.949823Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-20T12:03:49.951854Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-20T12:03:49.951920Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-20T12:03:49.951952Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-20T12:03:49.951961Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-012564","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [d64a1130b02eed5e7dc7078289282b5654f8074baf5d51c12a3e3e12fde5ac85] <==
	{"level":"warn","ts":"2025-10-20T12:04:03.872571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.878419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.885505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.892818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.900565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.907601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.915755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.922813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.929033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.934937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.942088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.949336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.955560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.961639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.967627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.974727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.980792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:03.986827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:04.000464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:04.006810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:04.014052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:04:04.065606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44500","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-20T12:14:03.570698Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1146}
	{"level":"info","ts":"2025-10-20T12:14:03.591459Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1146,"took":"20.442167ms","hash":1872903845,"current-db-size-bytes":3547136,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1601536,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-10-20T12:14:03.591514Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1872903845,"revision":1146,"compact-revision":-1}
	
	
	==> kernel <==
	 12:14:32 up 57 min,  0 user,  load average: 0.64, 0.34, 0.38
	Linux functional-012564 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [391bdfa78c7c34d6f7512212a17db16c5e3ffcc443d9818cc9674570d9190f8b] <==
	I1020 12:12:23.466849       1 main.go:301] handling current node
	I1020 12:12:33.469930       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:12:33.469966       1 main.go:301] handling current node
	I1020 12:12:43.467055       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:12:43.467090       1 main.go:301] handling current node
	I1020 12:12:53.462740       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:12:53.462785       1 main.go:301] handling current node
	I1020 12:13:03.461067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:13:03.461114       1 main.go:301] handling current node
	I1020 12:13:13.464860       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:13:13.464892       1 main.go:301] handling current node
	I1020 12:13:23.461328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:13:23.461368       1 main.go:301] handling current node
	I1020 12:13:33.469432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:13:33.469464       1 main.go:301] handling current node
	I1020 12:13:43.465058       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:13:43.465089       1 main.go:301] handling current node
	I1020 12:13:53.461897       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:13:53.461947       1 main.go:301] handling current node
	I1020 12:14:03.460876       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:14:03.460935       1 main.go:301] handling current node
	I1020 12:14:13.465036       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:14:13.465098       1 main.go:301] handling current node
	I1020 12:14:23.467414       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:14:23.467447       1 main.go:301] handling current node
	
	
	==> kindnet [983936e0433e5a604c7646f3c19159f78eb19cc4a7732bfd2d3c6d3024563a57] <==
	I1020 12:03:07.989788       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:03:07.990090       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1020 12:03:07.990247       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:03:07.990266       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:03:07.990290       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:03:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:03:08.191573       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:03:08.191593       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:03:08.191604       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:03:08.282323       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:03:08.492699       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:03:08.492731       1 metrics.go:72] Registering metrics
	I1020 12:03:08.492832       1 controller.go:711] "Syncing nftables rules"
	I1020 12:03:18.192225       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:03:18.192326       1 main.go:301] handling current node
	I1020 12:03:28.199235       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:03:28.199274       1 main.go:301] handling current node
	I1020 12:03:38.196598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:03:38.196636       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4e72a0a1bbca0c0381c7a6a878de598aa93b6481a97842487d096bb5f49d9f1c] <==
	I1020 12:04:04.587059       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:04:04.587228       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:04:05.442196       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:04:05.502493       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1020 12:04:05.748606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1020 12:04:05.749904       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:04:05.755049       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:04:06.215244       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 12:04:06.308069       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:04:06.360992       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:04:06.366370       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:04:07.441017       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 12:04:20.933384       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.91.120"}
	I1020 12:04:25.365258       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.68.252"}
	I1020 12:04:26.672875       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.12.166"}
	I1020 12:04:30.658248       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.27.46"}
	I1020 12:04:48.076381       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:04:48.182298       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.80.128"}
	I1020 12:04:48.198579       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.211.233"}
	E1020 12:04:52.245072       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49360: use of closed network connection
	I1020 12:04:55.412528       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.105.174.151"}
	E1020 12:05:01.078467       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:59704: use of closed network connection
	E1020 12:05:08.538332       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60242: use of closed network connection
	E1020 12:05:09.334640       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60254: use of closed network connection
	I1020 12:14:04.498637       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [51d97415c8a061e5f90b26def63bdbb8fc7b795fdd8e15b75c290c3c1a95ed42] <==
	I1020 12:03:52.091215       1 serving.go:386] Generated self-signed cert in-memory
	I1020 12:03:52.524737       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1020 12:03:52.524757       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:03:52.526109       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1020 12:03:52.526112       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1020 12:03:52.526288       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1020 12:03:52.526379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1020 12:04:02.527671       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [f8f02f64a43954724e210cbdaa827f3b06eee87ad7eaee3eb39ed043233565fd] <==
	I1020 12:04:07.318554       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1020 12:04:07.320812       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 12:04:07.322046       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 12:04:07.324256       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:04:07.334657       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1020 12:04:07.335818       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 12:04:07.335968       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1020 12:04:07.335972       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:04:07.336068       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:04:07.336080       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:04:07.336274       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 12:04:07.336461       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 12:04:07.338282       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 12:04:07.340814       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:04:07.340931       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 12:04:07.344663       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 12:04:07.347913       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 12:04:07.349043       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 12:04:07.361368       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1020 12:04:48.129563       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1020 12:04:48.133316       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1020 12:04:48.136906       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1020 12:04:48.137619       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1020 12:04:48.141287       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1020 12:04:48.146866       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [192fecddca73ae24aac0efc9301d7a3ebab0d71e27ecf14f2a29907b5c601b16] <==
	I1020 12:03:07.913555       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:03:07.972355       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:03:08.072566       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:03:08.072599       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1020 12:03:08.072666       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:03:08.093174       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:03:08.093223       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:03:08.099345       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:03:08.099803       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:03:08.099837       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:03:08.101291       1 config.go:200] "Starting service config controller"
	I1020 12:03:08.101322       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:03:08.101352       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:03:08.101379       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:03:08.101460       1 config.go:309] "Starting node config controller"
	I1020 12:03:08.101466       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:03:08.101732       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:03:08.101835       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:03:08.201541       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:03:08.201562       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:03:08.201589       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:03:08.202903       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [3881e5c98ae64b5d0f3800262471f140859052e57bfa0de46820f64b786fc664] <==
	I1020 12:03:43.258941       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:03:43.359959       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:03:43.360002       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1020 12:03:43.360089       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:03:43.379482       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:03:43.379540       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:03:43.385371       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:03:43.385860       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:03:43.385890       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:03:43.387397       1 config.go:200] "Starting service config controller"
	I1020 12:03:43.387415       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:03:43.387417       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:03:43.387448       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:03:43.387489       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:03:43.387514       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:03:43.387539       1 config.go:309] "Starting node config controller"
	I1020 12:03:43.387619       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:03:43.387630       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:03:43.487616       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:03:43.487685       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:03:43.487708       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	E1020 12:04:04.504107       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1020 12:04:04.504107       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:04:04.504101       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1020 12:04:04.504124       1 reflector.go:205] "Failed to watch" err="nodes \"functional-012564\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
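
Note: the four "Failed to watch ... RBAC: clusterrole ... not found" errors above are all timestamped 12:04:04, during the apiserver restart visible in the kube-apiserver log; the bootstrap controller recreates the default system: clusterroles shortly after startup, so these watch failures are transient and clear once reconciliation finishes. A hedged spot-check, assuming kubectl is pointed at this cluster:

	kubectl get clusterrole system:node-proxier system:basic-user
	# both roles list normally once bootstrap reconciliation has finished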
	
	
	==> kube-scheduler [21216461a39105cfed3bdf720f74559faf99ccfda2cf4a36ca0655e6a5cadbca] <==
	E1020 12:02:59.586073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:02:59.586264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:02:59.586260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 12:02:59.586306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:03:00.416478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 12:03:00.423727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:03:00.436969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:03:00.437837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:03:00.438550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 12:03:00.445550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:03:00.449413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:03:00.482135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:03:00.518627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1020 12:03:00.534021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:03:00.562297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:03:00.686957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:03:00.811440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:03:00.816514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1020 12:03:03.181486       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:03:42.836844       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:03:42.836911       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1020 12:03:42.837014       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1020 12:03:42.837048       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1020 12:03:42.837101       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1020 12:03:42.837126       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [27e3e40612bb8a4d6c90afff9b955ad9883e115104b9b0fc17c9eded4fe4370d] <==
	I1020 12:03:51.291972       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:03:51.291956       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:03:51.292411       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 12:03:51.292473       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 12:03:51.392037       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1020 12:03:51.392103       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:03:51.392137       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1020 12:04:04.481198       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1020 12:04:04.481250       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1020 12:04:04.490789       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:04:04.490828       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:04:04.490852       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:04:04.490952       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:04:04.490987       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:04:04.491031       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 12:04:04.491080       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:04:04.491104       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:04:04.491495       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:04:04.491523       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:04:04.491559       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 12:04:04.491764       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:04:04.491848       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 12:04:04.491908       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:04:04.491915       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:04:04.492062       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	
	
	==> kubelet <==
	Oct 20 12:11:50 functional-012564 kubelet[4287]: E1020 12:11:50.352974    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:11:57 functional-012564 kubelet[4287]: E1020 12:11:57.353692    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cfmmv" podUID="b1fa8314-b0b6-4ce3-8e80-f58791673348"
	Oct 20 12:12:02 functional-012564 kubelet[4287]: E1020 12:12:02.353457    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:12:12 functional-012564 kubelet[4287]: E1020 12:12:12.354039    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cfmmv" podUID="b1fa8314-b0b6-4ce3-8e80-f58791673348"
	Oct 20 12:12:13 functional-012564 kubelet[4287]: E1020 12:12:13.353391    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:12:24 functional-012564 kubelet[4287]: E1020 12:12:24.353295    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:12:26 functional-012564 kubelet[4287]: E1020 12:12:26.353926    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cfmmv" podUID="b1fa8314-b0b6-4ce3-8e80-f58791673348"
	Oct 20 12:12:37 functional-012564 kubelet[4287]: E1020 12:12:37.353238    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:12:41 functional-012564 kubelet[4287]: E1020 12:12:41.353309    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cfmmv" podUID="b1fa8314-b0b6-4ce3-8e80-f58791673348"
	Oct 20 12:12:50 functional-012564 kubelet[4287]: E1020 12:12:50.354089    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:12:56 functional-012564 kubelet[4287]: E1020 12:12:56.353495    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cfmmv" podUID="b1fa8314-b0b6-4ce3-8e80-f58791673348"
	Oct 20 12:13:01 functional-012564 kubelet[4287]: E1020 12:13:01.353547    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:13:08 functional-012564 kubelet[4287]: E1020 12:13:08.353354    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cfmmv" podUID="b1fa8314-b0b6-4ce3-8e80-f58791673348"
	Oct 20 12:13:14 functional-012564 kubelet[4287]: E1020 12:13:14.353186    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:13:22 functional-012564 kubelet[4287]: E1020 12:13:22.353793    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cfmmv" podUID="b1fa8314-b0b6-4ce3-8e80-f58791673348"
	Oct 20 12:13:29 functional-012564 kubelet[4287]: E1020 12:13:29.353407    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:13:36 functional-012564 kubelet[4287]: E1020 12:13:36.353915    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cfmmv" podUID="b1fa8314-b0b6-4ce3-8e80-f58791673348"
	Oct 20 12:13:41 functional-012564 kubelet[4287]: E1020 12:13:41.353414    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:13:48 functional-012564 kubelet[4287]: E1020 12:13:48.355097    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cfmmv" podUID="b1fa8314-b0b6-4ce3-8e80-f58791673348"
	Oct 20 12:13:55 functional-012564 kubelet[4287]: E1020 12:13:55.353694    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:14:01 functional-012564 kubelet[4287]: E1020 12:14:01.352816    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cfmmv" podUID="b1fa8314-b0b6-4ce3-8e80-f58791673348"
	Oct 20 12:14:09 functional-012564 kubelet[4287]: E1020 12:14:09.353210    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:14:14 functional-012564 kubelet[4287]: E1020 12:14:14.353313    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cfmmv" podUID="b1fa8314-b0b6-4ce3-8e80-f58791673348"
	Oct 20 12:14:22 functional-012564 kubelet[4287]: E1020 12:14:22.353894    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-rrp5g" podUID="9a7b79cc-cbd7-4f22-8354-a61affcf9457"
	Oct 20 12:14:28 functional-012564 kubelet[4287]: E1020 12:14:28.353493    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cfmmv" podUID="b1fa8314-b0b6-4ce3-8e80-f58791673348"
	
	
	==> kubernetes-dashboard [db3835ee8e00c93d3f51f371eb211c27e35fdcbeffc8090db04dc48f848c891b] <==
	2025/10/20 12:04:52 Using namespace: kubernetes-dashboard
	2025/10/20 12:04:52 Using in-cluster config to connect to apiserver
	2025/10/20 12:04:52 Using secret token for csrf signing
	2025/10/20 12:04:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 12:04:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 12:04:52 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 12:04:52 Generating JWE encryption key
	2025/10/20 12:04:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 12:04:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 12:04:52 Initializing JWE encryption key from synchronized object
	2025/10/20 12:04:52 Creating in-cluster Sidecar client
	2025/10/20 12:04:52 Successful request to sidecar
	2025/10/20 12:04:52 Serving insecurely on HTTP port: 9090
	2025/10/20 12:04:52 Starting overwatch
	
	
	==> storage-provisioner [493a793ac3fd5e18fe45f8a1a2b80c6dbf64cd058155b1747544fc914058cc63] <==
	W1020 12:03:18.963087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:18.967054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:03:19.061527       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-012564_03999ef1-6513-4148-9d16-0a995dea4ea0!
	W1020 12:03:20.970864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:20.975128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:22.978005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:22.982826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:24.986035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:24.989889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:26.992584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:26.996834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:28.999888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:29.004969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:31.008474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:31.012507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:33.015522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:33.019308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:35.022265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:35.035801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:37.038721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:37.042545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:39.046345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:39.051166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:41.054156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:41.058160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b2b978090e25fa140f8f88608f48b80dcf01ed9a76b714ab2ff07dab81aade85] <==
	W1020 12:14:08.347428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:10.350122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:10.354016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:12.356716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:12.360588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:14.364454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:14.369611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:16.373320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:16.377089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:18.379475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:18.383176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:20.385468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:20.389258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:22.392358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:22.396907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:24.400586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:24.405000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:26.407551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:26.411529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:28.413945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:28.418344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:30.420945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:30.425227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:32.428700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:14:32.432929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012564 -n functional-012564
helpers_test.go:269: (dbg) Run:  kubectl --context functional-012564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-rrp5g hello-node-connect-7d85dfc575-cfmmv
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-012564 describe pod busybox-mount hello-node-75c85bcc94-rrp5g hello-node-connect-7d85dfc575-cfmmv
helpers_test.go:290: (dbg) kubectl --context functional-012564 describe pod busybox-mount hello-node-75c85bcc94-rrp5g hello-node-connect-7d85dfc575-cfmmv:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-012564/192.168.49.2
	Start Time:       Mon, 20 Oct 2025 12:04:38 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  cri-o://aadd275a9ab6761defeb3995e144e7aeeeee370c4080122891b29aef6baf4d33
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 20 Oct 2025 12:04:40 +0000
	      Finished:     Mon, 20 Oct 2025 12:04:40 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-phbp6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-phbp6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m54s  default-scheduler  Successfully assigned default/busybox-mount to functional-012564
	  Normal  Pulling    9m54s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m53s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.277s (1.277s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m53s  kubelet            Created container: mount-munger
	  Normal  Started    9m53s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-rrp5g
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-012564/192.168.49.2
	Start Time:       Mon, 20 Oct 2025 12:04:25 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m4cpm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-m4cpm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-rrp5g to functional-012564
	  Normal   Pulling    7m3s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m3s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m3s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    0s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     0s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-cfmmv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-012564/192.168.49.2
	Start Time:       Mon, 20 Oct 2025 12:04:30 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-whr5x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-whr5x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cfmmv to functional-012564
	  Normal   Pulling    7m1s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m1s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m1s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.97s)
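Every pull above fails with the same CRI-O error: short-name mode is enforcing, so the unqualified reference kicbase/echo-server resolves ambiguously against multiple configured registries. A minimal fix sketch, assuming docker.io is the intended registry (an assumption, not verified against this run), is to fully qualify the image on the existing deployment:

	# sketch: fully qualify the short name so enforcing short-name mode can resolve it
	kubectl --context functional-012564 set image deployment/hello-node-connect \
	    echo-server=docker.io/kicbase/echo-server:latest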

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-012564 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-012564 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-rrp5g" [9a7b79cc-cbd7-4f22-8354-a61affcf9457] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012564 -n functional-012564
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-20 12:14:25.692151152 +0000 UTC m=+1108.652586267
functional_test.go:1460: (dbg) Run:  kubectl --context functional-012564 describe po hello-node-75c85bcc94-rrp5g -n default
functional_test.go:1460: (dbg) kubectl --context functional-012564 describe po hello-node-75c85bcc94-rrp5g -n default:
Name:             hello-node-75c85bcc94-rrp5g
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-012564/192.168.49.2
Start Time:       Mon, 20 Oct 2025 12:04:25 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m4cpm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-m4cpm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-rrp5g to functional-012564
Normal   Pulling    6m55s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m55s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m55s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     4m54s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m41s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-012564 logs hello-node-75c85bcc94-rrp5g -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-012564 logs hello-node-75c85bcc94-rrp5g -n default: exit status 1 (69.573515ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-rrp5g" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-012564 logs hello-node-75c85bcc94-rrp5g -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)
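The deployment itself is created; only the image pull fails, for the same short-name reason as above. Two hedged alternatives: create the deployment with a fully qualified reference, or add a short-name alias on the node (a sketch only; it assumes CRI-O on the node honors registries.conf.d drop-ins):

	kubectl --context functional-012564 create deployment hello-node --image=docker.io/kicbase/echo-server:latest

	# /etc/containers/registries.conf.d/99-echo-server.conf (on the minikube node; hypothetical file name)
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"
	# then: minikube -p functional-012564 ssh -- sudo systemctl restart crio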

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image load --daemon kicbase/echo-server:functional-012564 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-012564" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)
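The assertion at functional_test.go:461 is simply that the tag appears in `image ls` after `image load --daemon`; the ImageReloadDaemon and ImageTagAndLoadDaemon variants below fail the same check. A manual reproduction sketch using the same commands the suite runs:

	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-012564
	out/minikube-linux-amd64 -p functional-012564 image load --daemon kicbase/echo-server:functional-012564
	out/minikube-linux-amd64 -p functional-012564 image ls | grep echo-server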

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image load --daemon kicbase/echo-server:functional-012564 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-012564" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-012564
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image load --daemon kicbase/echo-server:functional-012564 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-012564" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image save kicbase/echo-server:functional-012564 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)
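The check at functional_test.go:401 is a plain existence test on the output tarball. To reproduce outside the harness (same command and path as the test, with an ls added as the equivalent existence check):

	out/minikube-linux-amd64 -p functional-012564 image save kicbase/echo-server:functional-012564 \
	    /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	ls -l /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar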

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1020 12:04:29.914085   48957 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:04:29.914390   48957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:04:29.914402   48957 out.go:374] Setting ErrFile to fd 2...
	I1020 12:04:29.914406   48957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:04:29.914663   48957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:04:29.915288   48957 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:04:29.915398   48957 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:04:29.915817   48957 cli_runner.go:164] Run: docker container inspect functional-012564 --format={{.State.Status}}
	I1020 12:04:29.934521   48957 ssh_runner.go:195] Run: systemctl --version
	I1020 12:04:29.934572   48957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012564
	I1020 12:04:29.951809   48957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/functional-012564/id_rsa Username:docker}
	I1020 12:04:30.050445   48957 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1020 12:04:30.050535   48957 cache_images.go:254] Failed to load cached images for "functional-012564": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1020 12:04:30.050571   48957 cache_images.go:266] failed pushing to: functional-012564

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
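This is a cascade failure: the stat error in the stderr above shows the tarball from ImageSaveToFile was never written, so there is nothing to load. The precondition to verify before re-running:

	stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar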

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-012564
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image save --daemon kicbase/echo-server:functional-012564 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-012564
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-012564: exit status 1 (18.595228ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-012564

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-012564

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)
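The test removes the host-side tag, runs `image save --daemon`, and then expects the image to reappear in the local Docker daemon under the localhost/ prefix. A reproduction sketch (the --format flag is an addition for a terse check, not part of the test):

	docker rmi kicbase/echo-server:functional-012564
	out/minikube-linux-amd64 -p functional-012564 image save --daemon kicbase/echo-server:functional-012564 --alsologtostderr
	docker image inspect --format '{{.Id}}' localhost/kicbase/echo-server:functional-012564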

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 service --namespace=default --https --url hello-node: exit status 115 (521.251599ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30484
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-012564 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 service hello-node --url --format={{.IP}}: exit status 115 (519.985651ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-012564 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 service hello-node --url: exit status 115 (525.0813ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30484
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-012564 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30484
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)
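All three ServiceCmd URL variants above (HTTPS, Format, URL) fail the same way: minikube resolves the NodePort but refuses to report the service as reachable because no hello-node pod is running, which traces back to the ImagePullBackOff earlier in this report. A diagnostic sketch:

	kubectl --context functional-012564 get pods -l app=hello-node -o wide
	kubectl --context functional-012564 get endpoints hello-node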

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.98s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-795500 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-795500 --output=json --user=testUser: exit status 80 (1.984215784s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0d34f6be-e94f-42ad-8462-a439d72e053a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-795500 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"837f827b-021f-4dcf-91ad-6de9fddc15f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-20T12:23:31Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"56361473-44be-4b85-999b-1dc757e458ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-795500 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.98s)

                                                
                                    
TestJSONOutput/unpause/Command (1.45s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-795500 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-795500 --output=json --user=testUser: exit status 80 (1.451012586s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fc807d6c-ccf0-4497-8d14-b17d463f7a61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-795500 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"7f68481c-d9ed-4c84-aa63-e2c73c928350","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-20T12:23:33Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"c55de875-e41d-4b6d-befc-c2031a3dfe4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-795500 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.45s)
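Both pause and unpause shell out to `sudo runc list -f json` on the node, and that command fails because /run/runc does not exist; the TestPause/serial/Pause failure below hits the same error. One plausible cause (unverified) is that the node's CRI-O is using a runtime whose state root is not /run/runc. To check by hand while the profile is still up:

	minikube -p json-output-795500 ssh -- "ls -ld /run/runc"
	minikube -p json-output-795500 ssh -- "sudo runc list -f json"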

                                                
                                    
TestPause/serial/Pause (5.69s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-918853 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-918853 --alsologtostderr -v=5: exit status 80 (1.805146062s)

                                                
                                                
-- stdout --
	* Pausing node pause-918853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 12:38:49.048290  220470 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:38:49.048641  220470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:38:49.048674  220470 out.go:374] Setting ErrFile to fd 2...
	I1020 12:38:49.048687  220470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:38:49.048968  220470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:38:49.049320  220470 out.go:368] Setting JSON to false
	I1020 12:38:49.049388  220470 mustload.go:65] Loading cluster: pause-918853
	I1020 12:38:49.049808  220470 config.go:182] Loaded profile config "pause-918853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:49.050305  220470 cli_runner.go:164] Run: docker container inspect pause-918853 --format={{.State.Status}}
	I1020 12:38:49.074043  220470 host.go:66] Checking if "pause-918853" exists ...
	I1020 12:38:49.074394  220470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:38:49.152619  220470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:90 SystemTime:2025-10-20 12:38:49.138365419 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:38:49.153499  220470 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-918853 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 12:38:49.156432  220470 out.go:179] * Pausing node pause-918853 ... 
	I1020 12:38:49.157744  220470 host.go:66] Checking if "pause-918853" exists ...
	I1020 12:38:49.158156  220470 ssh_runner.go:195] Run: systemctl --version
	I1020 12:38:49.158209  220470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:49.179940  220470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/pause-918853/id_rsa Username:docker}
	I1020 12:38:49.292960  220470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:38:49.309901  220470 pause.go:52] kubelet running: true
	I1020 12:38:49.309967  220470 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:38:49.481545  220470 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:38:49.481669  220470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:38:49.592516  220470 cri.go:89] found id: "72a3b202a76412f26700ad62c38784891b6c00402b287588a8795c5e217ecc86"
	I1020 12:38:49.592678  220470 cri.go:89] found id: "c9e90a7b75b16fc8e8ef756cc88964cd974e6f53a9390b604d0d29be3e4e48e8"
	I1020 12:38:49.592690  220470 cri.go:89] found id: "8845cf52f71fb552c506de59a81f18ebd549bf1903b7034a503f9a73ce2b6fd1"
	I1020 12:38:49.592696  220470 cri.go:89] found id: "ee48e32b2f57cf831f9662b4a8970dd4580fe5fff3bbd3ab9b8a106a97178013"
	I1020 12:38:49.592701  220470 cri.go:89] found id: "84c4c4d5781d5e4a18aa2f86b8f181bb6608c642b20ac03d501b5e5dcf22e42b"
	I1020 12:38:49.592733  220470 cri.go:89] found id: "e0e2d9777d82f4ff2db4444ef7768324a1d003e72c9d5d301c966ab348bbfb96"
	I1020 12:38:49.592738  220470 cri.go:89] found id: "9f83eedefaca2713366a42166d43e08671f32da7f80f270c7d3e27b91389998c"
	I1020 12:38:49.592742  220470 cri.go:89] found id: ""
	I1020 12:38:49.592882  220470 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:38:49.608316  220470 retry.go:31] will retry after 243.774483ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:38:49Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:38:49.852743  220470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:38:49.868988  220470 pause.go:52] kubelet running: false
	I1020 12:38:49.869051  220470 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:38:50.014344  220470 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:38:50.014427  220470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:38:50.092741  220470 cri.go:89] found id: "72a3b202a76412f26700ad62c38784891b6c00402b287588a8795c5e217ecc86"
	I1020 12:38:50.092767  220470 cri.go:89] found id: "c9e90a7b75b16fc8e8ef756cc88964cd974e6f53a9390b604d0d29be3e4e48e8"
	I1020 12:38:50.092783  220470 cri.go:89] found id: "8845cf52f71fb552c506de59a81f18ebd549bf1903b7034a503f9a73ce2b6fd1"
	I1020 12:38:50.092788  220470 cri.go:89] found id: "ee48e32b2f57cf831f9662b4a8970dd4580fe5fff3bbd3ab9b8a106a97178013"
	I1020 12:38:50.092792  220470 cri.go:89] found id: "84c4c4d5781d5e4a18aa2f86b8f181bb6608c642b20ac03d501b5e5dcf22e42b"
	I1020 12:38:50.092796  220470 cri.go:89] found id: "e0e2d9777d82f4ff2db4444ef7768324a1d003e72c9d5d301c966ab348bbfb96"
	I1020 12:38:50.092800  220470 cri.go:89] found id: "9f83eedefaca2713366a42166d43e08671f32da7f80f270c7d3e27b91389998c"
	I1020 12:38:50.092804  220470 cri.go:89] found id: ""
	I1020 12:38:50.092866  220470 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:38:50.106620  220470 retry.go:31] will retry after 445.952346ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:38:50Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:38:50.552998  220470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:38:50.566561  220470 pause.go:52] kubelet running: false
	I1020 12:38:50.566618  220470 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:38:50.691657  220470 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:38:50.691744  220470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:38:50.766302  220470 cri.go:89] found id: "72a3b202a76412f26700ad62c38784891b6c00402b287588a8795c5e217ecc86"
	I1020 12:38:50.766329  220470 cri.go:89] found id: "c9e90a7b75b16fc8e8ef756cc88964cd974e6f53a9390b604d0d29be3e4e48e8"
	I1020 12:38:50.766335  220470 cri.go:89] found id: "8845cf52f71fb552c506de59a81f18ebd549bf1903b7034a503f9a73ce2b6fd1"
	I1020 12:38:50.766340  220470 cri.go:89] found id: "ee48e32b2f57cf831f9662b4a8970dd4580fe5fff3bbd3ab9b8a106a97178013"
	I1020 12:38:50.766345  220470 cri.go:89] found id: "84c4c4d5781d5e4a18aa2f86b8f181bb6608c642b20ac03d501b5e5dcf22e42b"
	I1020 12:38:50.766349  220470 cri.go:89] found id: "e0e2d9777d82f4ff2db4444ef7768324a1d003e72c9d5d301c966ab348bbfb96"
	I1020 12:38:50.766353  220470 cri.go:89] found id: "9f83eedefaca2713366a42166d43e08671f32da7f80f270c7d3e27b91389998c"
	I1020 12:38:50.766358  220470 cri.go:89] found id: ""
	I1020 12:38:50.766416  220470 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:38:50.781884  220470 out.go:203] 
	W1020 12:38:50.783511  220470 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:38:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:38:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:38:50.783534  220470 out.go:285] * 
	* 
	W1020 12:38:50.788524  220470 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:38:50.790607  220470 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-918853 --alsologtostderr -v=5" : exit status 80
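The failure mode is visible in the stderr above: `minikube pause` first lists running CRI containers (the crictl queries succeed and return seven IDs), then shells out to `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory"; after the retries it gives up with GUEST_PAUSE. Below is a minimal diagnostic sketch in Go (hypothetical, not part of the test suite) that reproduces the same probe on a node and falls back to crictl when the runc state root is missing; it assumes runc and crictl are on PATH, as they are inside the kicbase container.

	// runc_probe.go - hypothetical diagnostic, assuming runc and crictl on PATH.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The same invocation the pause path runs over SSH above.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Printf("runc sees containers:\n%s", out)
			return
		}
		fmt.Printf("runc list failed (%v): %s", err, out)
		// On this node the default state root is simply absent, which is
		// why every retry hits "open /run/runc: no such file or directory".
		if _, statErr := os.Stat("/run/runc"); os.IsNotExist(statErr) {
			fmt.Println("/run/runc does not exist; the runtime may keep its state elsewhere")
		}
		// Fallback: ask the CRI instead of runc directly (this succeeded above).
		out, err = exec.Command("sudo", "crictl", "ps", "--quiet").CombinedOutput()
		fmt.Printf("crictl ps --quiet (err=%v):\n%s", err, out)
	}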
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-918853
helpers_test.go:243: (dbg) docker inspect pause-918853:

-- stdout --
	[
	    {
	        "Id": "045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f",
	        "Created": "2025-10-20T12:38:00.784742664Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203597,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:38:01.274033494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f/hostname",
	        "HostsPath": "/var/lib/docker/containers/045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f/hosts",
	        "LogPath": "/var/lib/docker/containers/045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f/045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f-json.log",
	        "Name": "/pause-918853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-918853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-918853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f",
	                "LowerDir": "/var/lib/docker/overlay2/8677a9bae922cafaac6e4f5ad8aa0494f4d88bee5bf9bd58385fc2b7ed5faeef-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8677a9bae922cafaac6e4f5ad8aa0494f4d88bee5bf9bd58385fc2b7ed5faeef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8677a9bae922cafaac6e4f5ad8aa0494f4d88bee5bf9bd58385fc2b7ed5faeef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8677a9bae922cafaac6e4f5ad8aa0494f4d88bee5bf9bd58385fc2b7ed5faeef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-918853",
	                "Source": "/var/lib/docker/volumes/pause-918853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-918853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-918853",
	                "name.minikube.sigs.k8s.io": "pause-918853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "21c79fe1b9eab44c04189d745b529f4063130db476e56f2a6f80f010d9ce34dc",
	            "SandboxKey": "/var/run/docker/netns/21c79fe1b9ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33008"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33009"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33012"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33010"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33011"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-918853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:81:4a:34:8f:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1da2f5d7872345588dd336e9fa2645feab1c8f2b3c0bf2980c7ba8e6bcbd92e5",
	                    "EndpointID": "cb7db1c9949cd8e19e204d738707345117b9ce2d5c91e1f55a97e950dfcd4cd8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-918853",
	                        "045b9ae9e173"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
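For orientation, the NetworkSettings.Ports map in the inspect output above is also where minikube resolved the node's SSH endpoint earlier in the run: 22/tcp is published on 127.0.0.1:33008, matching the sshutil line in the pause log. A short sketch of the same lookup, assuming a local docker CLI and this profile name:

	// ssh_port_lookup.go - illustrative sketch, assuming a local docker CLI.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the pause log shows for resolving the SSH port.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "pause-918853").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the run above this prints 33008, i.e. ssh via 127.0.0.1:33008.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}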
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-918853 -n pause-918853
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-918853 -n pause-918853: exit status 2 (365.80092ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
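The harness tolerates this non-zero exit ("may be ok") because the status command's exit code reflects component state rather than a crash: the `{{.Host}}` field still prints Running, but kubelet was disabled during the failed pause attempt, so the overall status is non-zero. A sketch of how a caller can separate the printed state from the exit code, using only standard os/exec semantics:

	// status_exit.go - sketch of reading minikube status output plus exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "pause-918853", "-n", "pause-918853")
		out, err := cmd.Output() // stdout still carries "Running" here
		fmt.Printf("host: %s", out)
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 2 in the run above: the host is up, but not every
			// component (kubelet had just been disabled) is Running.
			fmt.Println("status exit code:", exitErr.ExitCode())
		}
	}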
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-918853 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-918853 logs -n 25: (1.0457939s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-312375 sudo journalctl -xeu kubelet --all --full --no-pager                                                       │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /etc/kubernetes/kubelet.conf                                                                      │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl cat docker --no-pager                                                                       │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /etc/docker/daemon.json                                                                           │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo docker system info                                                                                    │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cri-dockerd --version                                                                                 │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /etc/containerd/config.toml                                                                       │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo containerd config dump                                                                                │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl cat crio --no-pager                                                                         │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo crio config                                                                                           │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ delete  │ -p cilium-312375                                                                                                            │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ start   │ -p cert-expiration-365628 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-365628    │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ start   │ -p force-systemd-flag-670413 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-670413 │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ start   │ -p pause-918853 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ pause   │ -p pause-918853 --alsologtostderr -v=5                                                                                      │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:38:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:38:37.168921  216515 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:38:37.169239  216515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:38:37.169251  216515 out.go:374] Setting ErrFile to fd 2...
	I1020 12:38:37.169257  216515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:38:37.169499  216515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:38:37.169959  216515 out.go:368] Setting JSON to false
	I1020 12:38:37.171048  216515 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4866,"bootTime":1760959051,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:38:37.171151  216515 start.go:141] virtualization: kvm guest
	I1020 12:38:37.173632  216515 out.go:179] * [pause-918853] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:38:37.176541  216515 notify.go:220] Checking for updates...
	I1020 12:38:37.176570  216515 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:38:37.177927  216515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:38:37.179545  216515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:38:37.180830  216515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:38:37.182292  216515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:38:37.183884  216515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:38:37.185812  216515 config.go:182] Loaded profile config "pause-918853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:37.186365  216515 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:38:37.215979  216515 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:38:37.216183  216515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:38:37.289280  216515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:80 SystemTime:2025-10-20 12:38:37.278081745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:38:37.289453  216515 docker.go:318] overlay module found
	I1020 12:38:37.291743  216515 out.go:179] * Using the docker driver based on existing profile
	I1020 12:38:37.293813  216515 start.go:305] selected driver: docker
	I1020 12:38:37.293829  216515 start.go:925] validating driver "docker" against &{Name:pause-918853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-918853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:38:37.293933  216515 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:38:37.294011  216515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:38:37.368892  216515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:95 SystemTime:2025-10-20 12:38:37.358199029 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:38:37.369595  216515 cni.go:84] Creating CNI manager for ""
	I1020 12:38:37.369656  216515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:38:37.369703  216515 start.go:349] cluster config:
	{Name:pause-918853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-918853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:38:37.375912  216515 out.go:179] * Starting "pause-918853" primary control-plane node in "pause-918853" cluster
	I1020 12:38:37.377444  216515 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:38:37.378825  216515 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:38:37.379998  216515 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:38:37.380062  216515 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:38:37.380076  216515 cache.go:58] Caching tarball of preloaded images
	I1020 12:38:37.380060  216515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:38:37.380209  216515 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:38:37.380226  216515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:38:37.380383  216515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/config.json ...
	I1020 12:38:37.407571  216515 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:38:37.407594  216515 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:38:37.407607  216515 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:38:37.407640  216515 start.go:360] acquireMachinesLock for pause-918853: {Name:mk965bd38db53d4ac880a0c625135874cb167a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:38:37.407730  216515 start.go:364] duration metric: took 41.997µs to acquireMachinesLock for "pause-918853"
	I1020 12:38:37.407748  216515 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:38:37.407756  216515 fix.go:54] fixHost starting: 
	I1020 12:38:37.408103  216515 cli_runner.go:164] Run: docker container inspect pause-918853 --format={{.State.Status}}
	I1020 12:38:37.430522  216515 fix.go:112] recreateIfNeeded on pause-918853: state=Running err=<nil>
	W1020 12:38:37.430560  216515 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 12:38:37.979997  210789 cli_runner.go:164] Run: docker container inspect missing-upgrade-123936 --format={{.State.Status}}
	W1020 12:38:38.002234  210789 cli_runner.go:211] docker container inspect missing-upgrade-123936 --format={{.State.Status}} returned with exit code 1
	I1020 12:38:38.002314  210789 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-123936": docker container inspect missing-upgrade-123936 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123936
	I1020 12:38:38.002326  210789 oci.go:673] temporary error: container missing-upgrade-123936 status is  but expect it to be exited
	I1020 12:38:38.002368  210789 oci.go:88] couldn't shut down missing-upgrade-123936 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-123936": docker container inspect missing-upgrade-123936 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123936
	 
	I1020 12:38:38.002417  210789 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-123936
	I1020 12:38:38.021927  210789 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-123936
	W1020 12:38:38.043882  210789 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-123936 returned with exit code 1
	I1020 12:38:38.043967  210789 cli_runner.go:164] Run: docker network inspect missing-upgrade-123936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:38.065731  210789 cli_runner.go:164] Run: docker network rm missing-upgrade-123936
	I1020 12:38:38.277539  210789 fix.go:124] Sleeping 1 second for extra luck!
	I1020 12:38:39.277682  210789 start.go:125] createHost starting for "" (driver="docker")
	I1020 12:38:39.497738  210789 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 12:38:39.497956  210789 start.go:159] libmachine.API.Create for "missing-upgrade-123936" (driver="docker")
	I1020 12:38:39.497999  210789 client.go:168] LocalClient.Create starting
	I1020 12:38:39.498104  210789 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 12:38:39.498158  210789 main.go:141] libmachine: Decoding PEM data...
	I1020 12:38:39.498179  210789 main.go:141] libmachine: Parsing certificate...
	I1020 12:38:39.498298  210789 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 12:38:39.498327  210789 main.go:141] libmachine: Decoding PEM data...
	I1020 12:38:39.498340  210789 main.go:141] libmachine: Parsing certificate...
	I1020 12:38:39.498653  210789 cli_runner.go:164] Run: docker network inspect missing-upgrade-123936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:38:39.522181  210789 cli_runner.go:211] docker network inspect missing-upgrade-123936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:38:39.522277  210789 network_create.go:284] running [docker network inspect missing-upgrade-123936] to gather additional debugging logs...
	I1020 12:38:39.522315  210789 cli_runner.go:164] Run: docker network inspect missing-upgrade-123936
	W1020 12:38:39.545186  210789 cli_runner.go:211] docker network inspect missing-upgrade-123936 returned with exit code 1
	I1020 12:38:39.545228  210789 network_create.go:287] error running [docker network inspect missing-upgrade-123936]: docker network inspect missing-upgrade-123936: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-123936 not found
	I1020 12:38:39.545247  210789 network_create.go:289] output of [docker network inspect missing-upgrade-123936]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-123936 not found
	
	** /stderr **
	I1020 12:38:39.545413  210789 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:39.567759  210789 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
	I1020 12:38:39.568734  210789 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b260bb82684 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:b2:17:6a:60:46} reservation:<nil>}
	I1020 12:38:39.569609  210789 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-db577f5b7d12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:85:a2:b7:03:1d} reservation:<nil>}
	I1020 12:38:39.570131  210789 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1f871d5cfd48 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a2:c6:86:42:b6:13} reservation:<nil>}
	I1020 12:38:39.571135  210789 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1da2f5d78723 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:8a:9b:da:cb:cc:03} reservation:<nil>}
	I1020 12:38:39.572301  210789 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020842a0}
	I1020 12:38:39.572332  210789 network_create.go:124] attempt to create docker network missing-upgrade-123936 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1020 12:38:39.572406  210789 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-123936 missing-upgrade-123936
	I1020 12:38:39.648738  210789 network_create.go:108] docker network missing-upgrade-123936 192.168.94.0/24 created
	I1020 12:38:39.648786  210789 kic.go:121] calculated static IP "192.168.94.2" for the "missing-upgrade-123936" container
	I1020 12:38:39.648888  210789 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:38:39.672646  210789 cli_runner.go:164] Run: docker volume create missing-upgrade-123936 --label name.minikube.sigs.k8s.io=missing-upgrade-123936 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:38:39.693109  210789 oci.go:103] Successfully created a docker volume missing-upgrade-123936
	I1020 12:38:39.693198  210789 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-123936-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-123936 --entrypoint /usr/bin/test -v missing-upgrade-123936:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1020 12:38:36.568266  215841 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 12:38:36.568505  215841 start.go:159] libmachine.API.Create for "cert-expiration-365628" (driver="docker")
	I1020 12:38:36.568540  215841 client.go:168] LocalClient.Create starting
	I1020 12:38:36.568623  215841 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 12:38:36.568665  215841 main.go:141] libmachine: Decoding PEM data...
	I1020 12:38:36.568683  215841 main.go:141] libmachine: Parsing certificate...
	I1020 12:38:36.568752  215841 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 12:38:36.568790  215841 main.go:141] libmachine: Decoding PEM data...
	I1020 12:38:36.568816  215841 main.go:141] libmachine: Parsing certificate...
	I1020 12:38:36.569185  215841 cli_runner.go:164] Run: docker network inspect cert-expiration-365628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:38:36.587910  215841 cli_runner.go:211] docker network inspect cert-expiration-365628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:38:36.587978  215841 network_create.go:284] running [docker network inspect cert-expiration-365628] to gather additional debugging logs...
	I1020 12:38:36.587991  215841 cli_runner.go:164] Run: docker network inspect cert-expiration-365628
	W1020 12:38:36.606011  215841 cli_runner.go:211] docker network inspect cert-expiration-365628 returned with exit code 1
	I1020 12:38:36.606034  215841 network_create.go:287] error running [docker network inspect cert-expiration-365628]: docker network inspect cert-expiration-365628: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-365628 not found
	I1020 12:38:36.606049  215841 network_create.go:289] output of [docker network inspect cert-expiration-365628]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-365628 not found
	
	** /stderr **
	I1020 12:38:36.606246  215841 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:36.626704  215841 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
	I1020 12:38:36.627187  215841 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b260bb82684 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:b2:17:6a:60:46} reservation:<nil>}
	I1020 12:38:36.627619  215841 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-db577f5b7d12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:85:a2:b7:03:1d} reservation:<nil>}
	I1020 12:38:36.628215  215841 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e1bc10}
	I1020 12:38:36.628238  215841 network_create.go:124] attempt to create docker network cert-expiration-365628 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1020 12:38:36.628297  215841 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-365628 cert-expiration-365628
	I1020 12:38:36.692174  215841 network_create.go:108] docker network cert-expiration-365628 192.168.76.0/24 created
	I1020 12:38:36.692200  215841 kic.go:121] calculated static IP "192.168.76.2" for the "cert-expiration-365628" container
	I1020 12:38:36.692292  215841 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:38:36.714335  215841 cli_runner.go:164] Run: docker volume create cert-expiration-365628 --label name.minikube.sigs.k8s.io=cert-expiration-365628 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:38:36.733618  215841 oci.go:103] Successfully created a docker volume cert-expiration-365628
	I1020 12:38:36.733677  215841 cli_runner.go:164] Run: docker run --rm --name cert-expiration-365628-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-365628 --entrypoint /usr/bin/test -v cert-expiration-365628:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 12:38:37.161122  215841 oci.go:107] Successfully prepared a docker volume cert-expiration-365628
	I1020 12:38:37.161260  215841 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:38:37.161286  215841 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:38:37.161372  215841 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-365628:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
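The preload is applied in the two docker run calls above: a throwaway "preload-sidecar" container runs /usr/bin/test -d /var/lib against the fresh volume (materializing it and sanity-checking its layout), then a second container untars the lz4 preload into it. As a standalone sketch, with $KIC_IMAGE and $PRELOAD standing in for the image digest and tarball path from the log:

	docker volume create demo-vol
	# test -d both validates /var/lib and forces docker to populate the volume
	docker run --rm --entrypoint /usr/bin/test -v demo-vol:/var "$KIC_IMAGE" -d /var/lib
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD":/preloaded.tar:ro -v demo-vol:/extractDir \
	  "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir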
	I1020 12:38:36.619206  215874 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 12:38:36.619480  215874 start.go:159] libmachine.API.Create for "force-systemd-flag-670413" (driver="docker")
	I1020 12:38:36.619514  215874 client.go:168] LocalClient.Create starting
	I1020 12:38:36.619620  215874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 12:38:36.619654  215874 main.go:141] libmachine: Decoding PEM data...
	I1020 12:38:36.619671  215874 main.go:141] libmachine: Parsing certificate...
	I1020 12:38:36.619728  215874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 12:38:36.619747  215874 main.go:141] libmachine: Decoding PEM data...
	I1020 12:38:36.619758  215874 main.go:141] libmachine: Parsing certificate...
	I1020 12:38:36.620109  215874 cli_runner.go:164] Run: docker network inspect force-systemd-flag-670413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:38:36.639598  215874 cli_runner.go:211] docker network inspect force-systemd-flag-670413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:38:36.639686  215874 network_create.go:284] running [docker network inspect force-systemd-flag-670413] to gather additional debugging logs...
	I1020 12:38:36.639707  215874 cli_runner.go:164] Run: docker network inspect force-systemd-flag-670413
	W1020 12:38:36.660895  215874 cli_runner.go:211] docker network inspect force-systemd-flag-670413 returned with exit code 1
	I1020 12:38:36.660933  215874 network_create.go:287] error running [docker network inspect force-systemd-flag-670413]: docker network inspect force-systemd-flag-670413: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-670413 not found
	I1020 12:38:36.660956  215874 network_create.go:289] output of [docker network inspect force-systemd-flag-670413]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-670413 not found
	
	** /stderr **
	I1020 12:38:36.661047  215874 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:36.680671  215874 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
	I1020 12:38:36.681347  215874 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b260bb82684 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:b2:17:6a:60:46} reservation:<nil>}
	I1020 12:38:36.682015  215874 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-db577f5b7d12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:85:a2:b7:03:1d} reservation:<nil>}
	I1020 12:38:36.682432  215874 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1f871d5cfd48 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a2:c6:86:42:b6:13} reservation:<nil>}
	I1020 12:38:36.683120  215874 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1da2f5d78723 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:8a:9b:da:cb:cc:03} reservation:<nil>}
	I1020 12:38:36.683906  215874 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-b134d2f2e79a IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:12:0e:db:e2:b0:64} reservation:<nil>}
	I1020 12:38:36.684795  215874 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f3f240}
	I1020 12:38:36.684824  215874 network_create.go:124] attempt to create docker network force-systemd-flag-670413 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1020 12:38:36.684876  215874 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-670413 force-systemd-flag-670413
	I1020 12:38:36.749449  215874 network_create.go:108] docker network force-systemd-flag-670413 192.168.103.0/24 created
	I1020 12:38:36.749486  215874 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-flag-670413" container
	I1020 12:38:36.749588  215874 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:38:36.768577  215874 cli_runner.go:164] Run: docker volume create force-systemd-flag-670413 --label name.minikube.sigs.k8s.io=force-systemd-flag-670413 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:38:36.792139  215874 oci.go:103] Successfully created a docker volume force-systemd-flag-670413
	I1020 12:38:36.792236  215874 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-670413-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-670413 --entrypoint /usr/bin/test -v force-systemd-flag-670413:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 12:38:37.247276  215874 oci.go:107] Successfully prepared a docker volume force-systemd-flag-670413
	I1020 12:38:37.247331  215874 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:38:37.247357  215874 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:38:37.247426  215874 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-670413:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 12:38:37.436368  216515 out.go:252] * Updating the running docker "pause-918853" container ...
	I1020 12:38:37.436425  216515 machine.go:93] provisionDockerMachine start ...
	I1020 12:38:37.436505  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:37.460075  216515 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:37.460433  216515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1020 12:38:37.460457  216515 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:38:37.612098  216515 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-918853
	
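Each docker container inspect call with the .NetworkSettings.Ports template recovers the host port docker mapped to the container's 22/tcp; every SSH session that follows goes to that port on 127.0.0.1. The equivalent one-liner, using the container name and key path from this run:

	port=$(docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-918853)
	ssh -p "$port" \
	  -i /home/jenkins/minikube-integration/21773-11075/.minikube/machines/pause-918853/id_rsa \
	  docker@127.0.0.1 hostname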
	I1020 12:38:37.612132  216515 ubuntu.go:182] provisioning hostname "pause-918853"
	I1020 12:38:37.612192  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:37.635804  216515 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:37.636124  216515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1020 12:38:37.636152  216515 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-918853 && echo "pause-918853" | sudo tee /etc/hostname
	I1020 12:38:37.793335  216515 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-918853
	
	I1020 12:38:37.793412  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:37.814803  216515 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:37.815036  216515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1020 12:38:37.815065  216515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-918853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-918853/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-918853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:38:37.961529  216515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:38:37.961577  216515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:38:37.961620  216515 ubuntu.go:190] setting up certificates
	I1020 12:38:37.961641  216515 provision.go:84] configureAuth start
	I1020 12:38:37.961709  216515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-918853
	I1020 12:38:37.983035  216515 provision.go:143] copyHostCerts
	I1020 12:38:37.983115  216515 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:38:37.983139  216515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:38:37.983225  216515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:38:37.983382  216515 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:38:37.983399  216515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:38:37.983446  216515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:38:37.983555  216515 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:38:37.983576  216515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:38:37.983615  216515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:38:37.983712  216515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.pause-918853 san=[127.0.0.1 192.168.85.2 localhost minikube pause-918853]
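configureAuth regenerates a server certificate whose SANs cover the container IP plus the hostnames listed above. minikube does this in Go; an openssl rendition of the same signing step (key and CA file names are illustrative, not taken from the run) would look like:

	openssl req -new -key server-key.pem -subj "/O=jenkins.pause-918853" |
	  openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:pause-918853') \
	    -out server.pem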
	I1020 12:38:38.306795  216515 provision.go:177] copyRemoteCerts
	I1020 12:38:38.306860  216515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:38:38.306915  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:38.328606  216515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/pause-918853/id_rsa Username:docker}
	I1020 12:38:38.437030  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:38:38.456033  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1020 12:38:38.475266  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:38:38.495103  216515 provision.go:87] duration metric: took 533.443588ms to configureAuth
	I1020 12:38:38.495136  216515 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:38:38.495384  216515 config.go:182] Loaded profile config "pause-918853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:38.495504  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:38.518802  216515 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:38.519037  216515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1020 12:38:38.519055  216515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:38:43.394706  216515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:38:43.394734  216515 machine.go:96] duration metric: took 5.958299693s to provisionDockerMachine
	I1020 12:38:43.394751  216515 start.go:293] postStartSetup for "pause-918853" (driver="docker")
	I1020 12:38:43.394766  216515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:38:43.394857  216515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:38:43.394927  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:43.421893  216515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/pause-918853/id_rsa Username:docker}
	I1020 12:38:43.530054  216515 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:38:43.534285  216515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:38:43.534326  216515 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:38:43.534340  216515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:38:43.534401  216515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:38:43.534501  216515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:38:43.534633  216515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:38:43.544484  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:38:43.564395  216515 start.go:296] duration metric: took 169.626583ms for postStartSetup
	I1020 12:38:43.564496  216515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:38:43.564561  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:43.586123  216515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/pause-918853/id_rsa Username:docker}
	I1020 12:38:43.687030  216515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:38:43.693622  216515 fix.go:56] duration metric: took 6.285860684s for fixHost
	I1020 12:38:43.693654  216515 start.go:83] releasing machines lock for "pause-918853", held for 6.28591413s
	I1020 12:38:43.693726  216515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-918853
	I1020 12:38:43.714455  216515 ssh_runner.go:195] Run: cat /version.json
	I1020 12:38:43.714505  216515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:38:43.714515  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:43.714583  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:43.738826  216515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/pause-918853/id_rsa Username:docker}
	I1020 12:38:43.739210  216515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/pause-918853/id_rsa Username:docker}
	I1020 12:38:43.945865  216515 ssh_runner.go:195] Run: systemctl --version
	I1020 12:38:43.957690  216515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:38:44.024544  216515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:38:44.031152  216515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:38:44.031224  216515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:38:44.045832  216515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
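The find invocation above is logged with its shell quoting stripped by ssh_runner; restored with conventional quoting, it renames any bridge/podman CNI configs aside with a .mk_disabled suffix (here there were none to move):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;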
	I1020 12:38:44.045864  216515 start.go:495] detecting cgroup driver to use...
	I1020 12:38:44.045902  216515 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:38:44.045954  216515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:38:44.084963  216515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:38:44.109132  216515 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:38:44.109259  216515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:38:44.148637  216515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:38:44.166062  216515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:38:44.290260  216515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:38:44.436804  216515 docker.go:234] disabling docker service ...
	I1020 12:38:44.436875  216515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:38:44.455858  216515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:38:44.477081  216515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:38:44.665371  216515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:38:44.823983  216515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:38:44.839816  216515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:38:44.856290  216515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:38:44.856350  216515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:44.924974  216515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:38:44.925055  216515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:44.949274  216515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:44.961996  216515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:44.976646  216515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:38:44.988639  216515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:45.000414  216515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:45.012229  216515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
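After the sed edits above, /etc/crio/crio.conf.d/02-crio.conf plausibly ends up carrying a fragment like the following (section headers assumed from CRI-O's stock drop-in layout; the file itself is not captured in this log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]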
	I1020 12:38:45.026165  216515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:38:45.037003  216515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:38:45.046683  216515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:45.180887  216515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:38:45.456384  216515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:38:45.456457  216515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:38:45.460963  216515 start.go:563] Will wait 60s for crictl version
	I1020 12:38:45.461035  216515 ssh_runner.go:195] Run: which crictl
	I1020 12:38:45.465347  216515 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:38:45.492694  216515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
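The two "Will wait 60s" steps above poll first for the CRI socket, then for a responsive crictl; rendered as a shell loop (a sketch, not minikube's code), they amount to:

	for i in $(seq 1 60); do
	  stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
	  sleep 1
	done
	sudo /usr/local/bin/crictl version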
	I1020 12:38:45.492883  216515 ssh_runner.go:195] Run: crio --version
	I1020 12:38:45.533491  216515 ssh_runner.go:195] Run: crio --version
	I1020 12:38:45.568345  216515 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:38:43.500437  210789 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-123936-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-123936 --entrypoint /usr/bin/test -v missing-upgrade-123936:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (3.807193406s)
	I1020 12:38:43.500463  210789 oci.go:107] Successfully prepared a docker volume missing-upgrade-123936
	I1020 12:38:43.500518  210789 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1020 12:38:43.500541  210789 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:38:43.500857  210789 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-123936:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 12:38:43.218711  215841 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-365628:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.05729701s)
	I1020 12:38:43.218736  215841 kic.go:203] duration metric: took 6.057448679s to extract preloaded images to volume ...
	W1020 12:38:43.218836  215841 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 12:38:43.218871  215841 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 12:38:43.218914  215841 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:38:43.303571  215841 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-365628 --name cert-expiration-365628 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-365628 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-365628 --network cert-expiration-365628 --ip 192.168.76.2 --volume cert-expiration-365628:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 12:38:43.653062  215841 cli_runner.go:164] Run: docker container inspect cert-expiration-365628 --format={{.State.Running}}
	I1020 12:38:43.680528  215841 cli_runner.go:164] Run: docker container inspect cert-expiration-365628 --format={{.State.Status}}
	I1020 12:38:43.704684  215841 cli_runner.go:164] Run: docker exec cert-expiration-365628 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:38:43.765652  215841 oci.go:144] the created container "cert-expiration-365628" has a running status.
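The docker run at 12:38:43.303571 is the node-container creation itself. Stripped of the minikube labels and the extra published ports, the load-bearing flags are (image, network, and names shortened to placeholders):

	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
	  --network demo-net --ip 192.168.76.2 --volume demo-vol:/var \
	  --memory=3072mb -e container=docker \
	  --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
	  --hostname demo-node --name demo-node "$KIC_IMAGE"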
	I1020 12:38:43.765674  215841 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa...
	I1020 12:38:44.218625  215841 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:38:44.410619  215841 cli_runner.go:164] Run: docker container inspect cert-expiration-365628 --format={{.State.Status}}
	I1020 12:38:44.435060  215841 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:38:44.435074  215841 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-365628 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:38:44.496846  215841 cli_runner.go:164] Run: docker container inspect cert-expiration-365628 --format={{.State.Status}}
	I1020 12:38:44.523642  215841 machine.go:93] provisionDockerMachine start ...
	I1020 12:38:44.523726  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:44.558745  215841 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:44.559123  215841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1020 12:38:44.559136  215841 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:38:44.725760  215841 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-365628
	
	I1020 12:38:44.725802  215841 ubuntu.go:182] provisioning hostname "cert-expiration-365628"
	I1020 12:38:44.725867  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:44.750269  215841 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:44.750565  215841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1020 12:38:44.750577  215841 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-365628 && echo "cert-expiration-365628" | sudo tee /etc/hostname
	I1020 12:38:44.948980  215841 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-365628
	
	I1020 12:38:44.949053  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:44.973288  215841 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:44.973495  215841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1020 12:38:44.973511  215841 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-365628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-365628/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-365628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:38:45.134845  215841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:38:45.134867  215841 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:38:45.134894  215841 ubuntu.go:190] setting up certificates
	I1020 12:38:45.134907  215841 provision.go:84] configureAuth start
	I1020 12:38:45.134973  215841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-365628
	I1020 12:38:45.156698  215841 provision.go:143] copyHostCerts
	I1020 12:38:45.156752  215841 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:38:45.156760  215841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:38:45.156866  215841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:38:45.156996  215841 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:38:45.157003  215841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:38:45.157046  215841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:38:45.157146  215841 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:38:45.157151  215841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:38:45.157188  215841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:38:45.157279  215841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-365628 san=[127.0.0.1 192.168.76.2 cert-expiration-365628 localhost minikube]
	I1020 12:38:45.855508  215841 provision.go:177] copyRemoteCerts
	I1020 12:38:45.855577  215841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:38:45.855623  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:45.877084  215841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa Username:docker}
	I1020 12:38:45.982630  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:38:46.005544  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1020 12:38:46.027755  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:38:46.046965  215841 provision.go:87] duration metric: took 912.046438ms to configureAuth
	I1020 12:38:46.046991  215841 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:38:46.047180  215841 config.go:182] Loaded profile config "cert-expiration-365628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:46.047301  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:46.069269  215841 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:46.069592  215841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1020 12:38:46.069610  215841 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:38:43.219485  215874 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-670413:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.972007092s)
	I1020 12:38:43.219514  215874 kic.go:203] duration metric: took 5.972153427s to extract preloaded images to volume ...
	W1020 12:38:43.219604  215874 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 12:38:43.219648  215874 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 12:38:43.219697  215874 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:38:43.302911  215874 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-670413 --name force-systemd-flag-670413 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-670413 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-670413 --network force-systemd-flag-670413 --ip 192.168.103.2 --volume force-systemd-flag-670413:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 12:38:43.788903  215874 cli_runner.go:164] Run: docker container inspect force-systemd-flag-670413 --format={{.State.Running}}
	I1020 12:38:43.815067  215874 cli_runner.go:164] Run: docker container inspect force-systemd-flag-670413 --format={{.State.Status}}
	I1020 12:38:43.845670  215874 cli_runner.go:164] Run: docker exec force-systemd-flag-670413 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:38:43.908163  215874 oci.go:144] the created container "force-systemd-flag-670413" has a running status.
	I1020 12:38:43.908194  215874 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa...
	I1020 12:38:44.393836  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1020 12:38:44.393889  215874 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:38:44.464792  215874 cli_runner.go:164] Run: docker container inspect force-systemd-flag-670413 --format={{.State.Status}}
	I1020 12:38:44.497632  215874 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:38:44.497666  215874 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-670413 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:38:44.572041  215874 cli_runner.go:164] Run: docker container inspect force-systemd-flag-670413 --format={{.State.Status}}
	I1020 12:38:44.597538  215874 machine.go:93] provisionDockerMachine start ...
	I1020 12:38:44.597742  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:44.626658  215874 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:44.627600  215874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1020 12:38:44.627622  215874 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:38:44.784485  215874 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-670413
	
	I1020 12:38:44.784511  215874 ubuntu.go:182] provisioning hostname "force-systemd-flag-670413"
	I1020 12:38:44.784575  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:44.812685  215874 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:44.813014  215874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1020 12:38:44.813036  215874 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-670413 && echo "force-systemd-flag-670413" | sudo tee /etc/hostname
	I1020 12:38:44.976328  215874 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-670413
	
	I1020 12:38:44.976407  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:45.000411  215874 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:45.000717  215874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1020 12:38:45.000754  215874 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-670413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-670413/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-670413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:38:45.154798  215874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:38:45.154842  215874 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:38:45.154897  215874 ubuntu.go:190] setting up certificates
	I1020 12:38:45.154909  215874 provision.go:84] configureAuth start
	I1020 12:38:45.154979  215874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-670413
	I1020 12:38:45.175931  215874 provision.go:143] copyHostCerts
	I1020 12:38:45.175980  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:38:45.176009  215874 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:38:45.176016  215874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:38:45.176081  215874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:38:45.176175  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:38:45.176199  215874 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:38:45.176206  215874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:38:45.176242  215874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:38:45.176313  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:38:45.176341  215874 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:38:45.176350  215874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:38:45.176377  215874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:38:45.176448  215874 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-670413 san=[127.0.0.1 192.168.103.2 force-systemd-flag-670413 localhost minikube]
	I1020 12:38:45.691704  215874 provision.go:177] copyRemoteCerts
	I1020 12:38:45.691783  215874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:38:45.691828  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:45.713247  215874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa Username:docker}
	I1020 12:38:45.816413  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1020 12:38:45.816470  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:38:45.846408  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1020 12:38:45.846477  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1020 12:38:45.866696  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1020 12:38:45.866766  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:38:45.888351  215874 provision.go:87] duration metric: took 733.420159ms to configureAuth
	I1020 12:38:45.888383  215874 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:38:45.888556  215874 config.go:182] Loaded profile config "force-systemd-flag-670413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:45.888664  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:45.911332  215874 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:45.911523  215874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1020 12:38:45.911539  215874 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:38:46.191701  215874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:38:46.191729  215874 machine.go:96] duration metric: took 1.594109749s to provisionDockerMachine
	I1020 12:38:46.191749  215874 client.go:171] duration metric: took 9.572227725s to LocalClient.Create
	I1020 12:38:46.191791  215874 start.go:167] duration metric: took 9.572310561s to libmachine.API.Create "force-systemd-flag-670413"
	I1020 12:38:46.191807  215874 start.go:293] postStartSetup for "force-systemd-flag-670413" (driver="docker")
	I1020 12:38:46.191822  215874 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:38:46.191890  215874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:38:46.191939  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:46.214637  215874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa Username:docker}
	I1020 12:38:46.326050  215874 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:38:46.330392  215874 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:38:46.330427  215874 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:38:46.330443  215874 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:38:46.330499  215874 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:38:46.330586  215874 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:38:46.330600  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> /etc/ssl/certs/145922.pem
	I1020 12:38:46.330688  215874 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:38:46.339829  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:38:46.365970  215874 start.go:296] duration metric: took 174.145915ms for postStartSetup
	I1020 12:38:46.366426  215874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-670413
	I1020 12:38:46.390475  215874 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/config.json ...
	I1020 12:38:46.390834  215874 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:38:46.390894  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:45.572935  216515 cli_runner.go:164] Run: docker network inspect pause-918853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:45.592821  216515 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 12:38:45.598368  216515 kubeadm.go:883] updating cluster {Name:pause-918853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-918853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:38:45.598537  216515 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:38:45.598604  216515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:38:45.637935  216515 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:38:45.637966  216515 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:38:45.638020  216515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:38:45.670288  216515 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:38:45.670319  216515 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:38:45.670328  216515 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 12:38:45.670470  216515 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-918853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-918853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
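In the kubelet drop-in printed above, the empty ExecStart= line is deliberate: systemd requires a drop-in to clear the ExecStart inherited from the base unit before a (non-oneshot) service may declare a new one, so the first ExecStart= resets it and the second supplies minikube's command line. The merged result can be inspected on the node while the profile is up, e.g.:

	$ minikube -p pause-918853 ssh -- systemctl cat kubelet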
	I1020 12:38:45.670561  216515 ssh_runner.go:195] Run: crio config
	I1020 12:38:45.733472  216515 cni.go:84] Creating CNI manager for ""
	I1020 12:38:45.733493  216515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:38:45.733509  216515 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:38:45.733537  216515 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-918853 NodeName:pause-918853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:38:45.733679  216515 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-918853"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 12:38:45.733741  216515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:38:45.743225  216515 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:38:45.743292  216515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:38:45.752226  216515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1020 12:38:45.767150  216515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:38:45.780437  216515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
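"scp memory --> PATH (N bytes)" in these logs means the payload was rendered in-process and streamed over SSH rather than copied from a local file; a rough shell equivalent (with $KUBEADM_YAML standing in for the generated config) would be:

	$ printf '%s' "$KUBEADM_YAML" | minikube -p pause-918853 ssh -- sudo tee /var/tmp/minikube/kubeadm.yaml.new

The freshly written config can also be sanity-checked in place with kubeadm's own validator (binary path taken from the log; availability of the validate subcommand in v1.34 assumed):

	$ minikube -p pause-918853 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new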
	I1020 12:38:45.794705  216515 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:38:45.798793  216515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:45.937312  216515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:38:45.953170  216515 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853 for IP: 192.168.85.2
	I1020 12:38:45.953197  216515 certs.go:195] generating shared ca certs ...
	I1020 12:38:45.953217  216515 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:45.953401  216515 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:38:45.953463  216515 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:38:45.953478  216515 certs.go:257] generating profile certs ...
	I1020 12:38:45.953586  216515 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.key
	I1020 12:38:45.953671  216515 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/apiserver.key.44a82604
	I1020 12:38:45.953740  216515 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/proxy-client.key
	I1020 12:38:45.953936  216515 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:38:45.953984  216515 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:38:45.954008  216515 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:38:45.954041  216515 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:38:45.954078  216515 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:38:45.954114  216515 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:38:45.954177  216515 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:38:45.955006  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:38:45.977336  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:38:45.999396  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:38:46.019446  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:38:46.040025  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1020 12:38:46.059673  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:38:46.082343  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:38:46.102835  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 12:38:46.126376  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:38:46.148484  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:38:46.171011  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:38:46.193010  216515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:38:46.209854  216515 ssh_runner.go:195] Run: openssl version
	I1020 12:38:46.217016  216515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:38:46.227138  216515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:38:46.232076  216515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:38:46.232145  216515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:38:46.273043  216515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:38:46.282377  216515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:38:46.293030  216515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:46.298563  216515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:46.298627  216515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:46.346033  216515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:38:46.357246  216515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:38:46.368645  216515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:38:46.373534  216515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:38:46.373601  216515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:38:46.421875  216515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
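The hash-named symlinks created above follow OpenSSL's c_rehash convention: x509 -hash prints the certificate's subject-name hash, and the system trust directory expects a <hash>.0 link pointing at the PEM so lookups by subject resolve without scanning every file. The b5213941.0 link for minikubeCA.pem, for example, corresponds to:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0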
	I1020 12:38:46.431665  216515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:38:46.436549  216515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 12:38:46.480685  216515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 12:38:46.530199  216515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 12:38:46.580149  216515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 12:38:46.623657  216515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 12:38:46.661723  216515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
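Each openssl run above passes -checkend 86400, which asks whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it stays valid past that window, 1 means imminent expiry, which is what would trigger regeneration. Checked by hand:

	$ openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	Certificate will not expire
	$ echo $?
	0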
	I1020 12:38:46.714359  216515 kubeadm.go:400] StartCluster: {Name:pause-918853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-918853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:38:46.714516  216515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:38:46.714589  216515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:38:46.752045  216515 cri.go:89] found id: "72a3b202a76412f26700ad62c38784891b6c00402b287588a8795c5e217ecc86"
	I1020 12:38:46.752072  216515 cri.go:89] found id: "c9e90a7b75b16fc8e8ef756cc88964cd974e6f53a9390b604d0d29be3e4e48e8"
	I1020 12:38:46.752077  216515 cri.go:89] found id: "8845cf52f71fb552c506de59a81f18ebd549bf1903b7034a503f9a73ce2b6fd1"
	I1020 12:38:46.752089  216515 cri.go:89] found id: "ee48e32b2f57cf831f9662b4a8970dd4580fe5fff3bbd3ab9b8a106a97178013"
	I1020 12:38:46.752094  216515 cri.go:89] found id: "84c4c4d5781d5e4a18aa2f86b8f181bb6608c642b20ac03d501b5e5dcf22e42b"
	I1020 12:38:46.752098  216515 cri.go:89] found id: "e0e2d9777d82f4ff2db4444ef7768324a1d003e72c9d5d301c966ab348bbfb96"
	I1020 12:38:46.752102  216515 cri.go:89] found id: "9f83eedefaca2713366a42166d43e08671f32da7f80f270c7d3e27b91389998c"
	I1020 12:38:46.752106  216515 cri.go:89] found id: ""
	I1020 12:38:46.752153  216515 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 12:38:46.765544  216515 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:38:46Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:38:46.765616  216515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:38:46.774473  216515 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 12:38:46.774495  216515 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 12:38:46.774551  216515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 12:38:46.785315  216515 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:38:46.785988  216515 kubeconfig.go:125] found "pause-918853" server: "https://192.168.85.2:8443"
	I1020 12:38:46.786727  216515 kapi.go:59] client config for pause-918853: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.crt", KeyFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.key", CAFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1020 12:38:46.787148  216515 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1020 12:38:46.787162  216515 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1020 12:38:46.787167  216515 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1020 12:38:46.787170  216515 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1020 12:38:46.787174  216515 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1020 12:38:46.787539  216515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 12:38:46.796535  216515 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1020 12:38:46.796578  216515 kubeadm.go:601] duration metric: took 22.077249ms to restartPrimaryControlPlane
	I1020 12:38:46.796590  216515 kubeadm.go:402] duration metric: took 82.241896ms to StartCluster
	I1020 12:38:46.796623  216515 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:46.796696  216515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:38:46.797952  216515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:46.798230  216515 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:38:46.798288  216515 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:38:46.798488  216515 config.go:182] Loaded profile config "pause-918853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:46.801114  216515 out.go:179] * Verifying Kubernetes components...
	I1020 12:38:46.801118  216515 out.go:179] * Enabled addons: 
	I1020 12:38:46.802682  216515 addons.go:514] duration metric: took 4.397627ms for enable addons: enabled=[]
	I1020 12:38:46.802726  216515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:46.944829  216515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:38:46.967350  216515 node_ready.go:35] waiting up to 6m0s for node "pause-918853" to be "Ready" ...
	I1020 12:38:46.976155  216515 node_ready.go:49] node "pause-918853" is "Ready"
	I1020 12:38:46.976179  216515 node_ready.go:38] duration metric: took 8.792396ms for node "pause-918853" to be "Ready" ...
	I1020 12:38:46.976192  216515 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:38:46.976239  216515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:38:46.988996  216515 api_server.go:72] duration metric: took 190.725956ms to wait for apiserver process to appear ...
	I1020 12:38:46.989027  216515 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:38:46.989052  216515 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:38:46.994398  216515 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
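The healthz probe above needs no client credentials: with anonymous auth at its default, the system:public-info-viewer RBAC binding exposes /healthz, /livez and /readyz to unauthenticated callers, so the same check can be reproduced with curl (-k because the serving certificate chains to the cluster CA, not a public one):

	$ curl -k https://192.168.85.2:8443/healthz
	ok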
	I1020 12:38:46.995831  216515 api_server.go:141] control plane version: v1.34.1
	I1020 12:38:46.995862  216515 api_server.go:131] duration metric: took 6.825551ms to wait for apiserver health ...
	I1020 12:38:46.995874  216515 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:38:47.000266  216515 system_pods.go:59] 7 kube-system pods found
	I1020 12:38:47.000314  216515 system_pods.go:61] "coredns-66bc5c9577-wnfvn" [456a6380-cb4a-4846-be4c-30bba34b7db3] Running
	I1020 12:38:47.000324  216515 system_pods.go:61] "etcd-pause-918853" [4d89e9c8-70a8-460b-a4ca-b0df9da06427] Running
	I1020 12:38:47.000330  216515 system_pods.go:61] "kindnet-pvqlr" [3bf6c55d-197d-4297-8c7e-7a7032090942] Running
	I1020 12:38:47.000335  216515 system_pods.go:61] "kube-apiserver-pause-918853" [473c0535-ca0a-4385-9403-cea2d0656193] Running
	I1020 12:38:47.000353  216515 system_pods.go:61] "kube-controller-manager-pause-918853" [cfb03e63-d5e2-4aad-8270-7b10ba695e5f] Running
	I1020 12:38:47.000361  216515 system_pods.go:61] "kube-proxy-9md6s" [7ab94d55-c409-4d18-8205-59568b5cfb7a] Running
	I1020 12:38:47.000366  216515 system_pods.go:61] "kube-scheduler-pause-918853" [2f5f0af8-d9ae-41a0-8c75-3d8c7b06a48a] Running
	I1020 12:38:47.000374  216515 system_pods.go:74] duration metric: took 4.493144ms to wait for pod list to return data ...
	I1020 12:38:47.000388  216515 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:38:47.004527  216515 default_sa.go:45] found service account: "default"
	I1020 12:38:47.004551  216515 default_sa.go:55] duration metric: took 4.155159ms for default service account to be created ...
	I1020 12:38:47.004563  216515 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:38:47.007615  216515 system_pods.go:86] 7 kube-system pods found
	I1020 12:38:47.007653  216515 system_pods.go:89] "coredns-66bc5c9577-wnfvn" [456a6380-cb4a-4846-be4c-30bba34b7db3] Running
	I1020 12:38:47.007662  216515 system_pods.go:89] "etcd-pause-918853" [4d89e9c8-70a8-460b-a4ca-b0df9da06427] Running
	I1020 12:38:47.007668  216515 system_pods.go:89] "kindnet-pvqlr" [3bf6c55d-197d-4297-8c7e-7a7032090942] Running
	I1020 12:38:47.007674  216515 system_pods.go:89] "kube-apiserver-pause-918853" [473c0535-ca0a-4385-9403-cea2d0656193] Running
	I1020 12:38:47.007680  216515 system_pods.go:89] "kube-controller-manager-pause-918853" [cfb03e63-d5e2-4aad-8270-7b10ba695e5f] Running
	I1020 12:38:47.007686  216515 system_pods.go:89] "kube-proxy-9md6s" [7ab94d55-c409-4d18-8205-59568b5cfb7a] Running
	I1020 12:38:47.007693  216515 system_pods.go:89] "kube-scheduler-pause-918853" [2f5f0af8-d9ae-41a0-8c75-3d8c7b06a48a] Running
	I1020 12:38:47.007705  216515 system_pods.go:126] duration metric: took 3.135625ms to wait for k8s-apps to be running ...
	I1020 12:38:47.007717  216515 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:38:47.007763  216515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:38:47.023158  216515 system_svc.go:56] duration metric: took 15.430738ms WaitForService to wait for kubelet
	I1020 12:38:47.023195  216515 kubeadm.go:586] duration metric: took 224.931257ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:38:47.023218  216515 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:38:47.026679  216515 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:38:47.026710  216515 node_conditions.go:123] node cpu capacity is 8
	I1020 12:38:47.026728  216515 node_conditions.go:105] duration metric: took 3.504646ms to run NodePressure ...
	I1020 12:38:47.026741  216515 start.go:241] waiting for startup goroutines ...
	I1020 12:38:47.026750  216515 start.go:246] waiting for cluster config update ...
	I1020 12:38:47.026760  216515 start.go:255] writing updated cluster config ...
	I1020 12:38:47.027128  216515 ssh_runner.go:195] Run: rm -f paused
	I1020 12:38:47.031140  216515 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:38:47.031998  216515 kapi.go:59] client config for pause-918853: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.crt", KeyFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.key", CAFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1020 12:38:47.035353  216515 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wnfvn" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.040523  216515 pod_ready.go:94] pod "coredns-66bc5c9577-wnfvn" is "Ready"
	I1020 12:38:47.040544  216515 pod_ready.go:86] duration metric: took 5.17249ms for pod "coredns-66bc5c9577-wnfvn" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.042835  216515 pod_ready.go:83] waiting for pod "etcd-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.047139  216515 pod_ready.go:94] pod "etcd-pause-918853" is "Ready"
	I1020 12:38:47.047165  216515 pod_ready.go:86] duration metric: took 4.30683ms for pod "etcd-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.049481  216515 pod_ready.go:83] waiting for pod "kube-apiserver-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.054237  216515 pod_ready.go:94] pod "kube-apiserver-pause-918853" is "Ready"
	I1020 12:38:47.054261  216515 pod_ready.go:86] duration metric: took 4.752013ms for pod "kube-apiserver-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.056582  216515 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.439578  216515 pod_ready.go:94] pod "kube-controller-manager-pause-918853" is "Ready"
	I1020 12:38:47.439602  216515 pod_ready.go:86] duration metric: took 382.99886ms for pod "kube-controller-manager-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.636219  216515 pod_ready.go:83] waiting for pod "kube-proxy-9md6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:48.035585  216515 pod_ready.go:94] pod "kube-proxy-9md6s" is "Ready"
	I1020 12:38:48.035609  216515 pod_ready.go:86] duration metric: took 399.368026ms for pod "kube-proxy-9md6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:48.236170  216515 pod_ready.go:83] waiting for pod "kube-scheduler-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:48.635349  216515 pod_ready.go:94] pod "kube-scheduler-pause-918853" is "Ready"
	I1020 12:38:48.635375  216515 pod_ready.go:86] duration metric: took 399.180697ms for pod "kube-scheduler-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:48.635386  216515 pod_ready.go:40] duration metric: took 1.604209536s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:38:48.681854  216515 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:38:48.791968  216515 out.go:179] * Done! kubectl is now configured to use "pause-918853" cluster and "default" namespace by default
	I1020 12:38:46.354827  215841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:38:46.354848  215841 machine.go:96] duration metric: took 1.831192867s to provisionDockerMachine
	I1020 12:38:46.354859  215841 client.go:171] duration metric: took 9.786313178s to LocalClient.Create
	I1020 12:38:46.354880  215841 start.go:167] duration metric: took 9.786377291s to libmachine.API.Create "cert-expiration-365628"
	I1020 12:38:46.354888  215841 start.go:293] postStartSetup for "cert-expiration-365628" (driver="docker")
	I1020 12:38:46.354900  215841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:38:46.354981  215841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:38:46.355026  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:46.379997  215841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa Username:docker}
	I1020 12:38:46.487863  215841 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:38:46.491927  215841 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:38:46.491952  215841 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:38:46.491964  215841 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:38:46.492032  215841 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:38:46.492151  215841 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:38:46.492277  215841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:38:46.500886  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:38:46.524757  215841 start.go:296] duration metric: took 169.855258ms for postStartSetup
	I1020 12:38:46.525253  215841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-365628
	I1020 12:38:46.547734  215841 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/config.json ...
	I1020 12:38:46.548094  215841 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:38:46.548140  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:46.570631  215841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa Username:docker}
	I1020 12:38:46.677196  215841 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:38:46.683234  215841 start.go:128] duration metric: took 10.118415299s to createHost
	I1020 12:38:46.683254  215841 start.go:83] releasing machines lock for "cert-expiration-365628", held for 10.118544604s
	I1020 12:38:46.683327  215841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-365628
	I1020 12:38:46.704393  215841 ssh_runner.go:195] Run: cat /version.json
	I1020 12:38:46.704427  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:46.704508  215841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:38:46.704578  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:46.725680  215841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa Username:docker}
	I1020 12:38:46.727099  215841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa Username:docker}
	I1020 12:38:46.904970  215841 ssh_runner.go:195] Run: systemctl --version
	I1020 12:38:46.913669  215841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:38:46.964964  215841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:38:46.970674  215841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:38:46.970731  215841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:38:47.004318  215841 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1020 12:38:47.004330  215841 start.go:495] detecting cgroup driver to use...
	I1020 12:38:47.004364  215841 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:38:47.004407  215841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:38:47.024654  215841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:38:47.040690  215841 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:38:47.040734  215841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:38:47.062030  215841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:38:47.085942  215841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:38:47.186718  215841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:38:47.299460  215841 docker.go:234] disabling docker service ...
	I1020 12:38:47.299509  215841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:38:47.318691  215841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:38:47.332867  215841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:38:47.446888  215841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:38:47.532073  215841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:38:47.544858  215841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:38:47.559880  215841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:38:47.559934  215841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.612228  215841 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:38:47.612283  215841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.622029  215841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.631503  215841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.705380  215841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:38:47.714162  215841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.762944  215841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.896078  215841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:48.019494  215841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:38:48.028145  215841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:38:48.036820  215841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:48.123428  215841 ssh_runner.go:195] Run: sudo systemctl restart crio
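Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like this before the restart (section headers assumed from CRI-O's stock drop-in layout; the values are the ones substituted by the commands):

	$ sudo cat /etc/crio/crio.conf.d/02-crio.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]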
	I1020 12:38:49.099330  215841 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:38:49.099419  215841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:38:49.106492  215841 start.go:563] Will wait 60s for crictl version
	I1020 12:38:49.106560  215841 ssh_runner.go:195] Run: which crictl
	I1020 12:38:49.113298  215841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:38:49.144648  215841 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:38:49.144742  215841 ssh_runner.go:195] Run: crio --version
	I1020 12:38:49.179879  215841 ssh_runner.go:195] Run: crio --version
	I1020 12:38:46.411891  215874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa Username:docker}
	I1020 12:38:46.517411  215874 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:38:46.523012  215874 start.go:128] duration metric: took 9.905815707s to createHost
	I1020 12:38:46.523044  215874 start.go:83] releasing machines lock for "force-systemd-flag-670413", held for 9.905997323s
	I1020 12:38:46.523119  215874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-670413
	I1020 12:38:46.544420  215874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:38:46.544466  215874 ssh_runner.go:195] Run: cat /version.json
	I1020 12:38:46.544504  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:46.544523  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:46.566569  215874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa Username:docker}
	I1020 12:38:46.569700  215874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa Username:docker}
	I1020 12:38:46.743853  215874 ssh_runner.go:195] Run: systemctl --version
	I1020 12:38:46.752201  215874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:38:46.796728  215874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:38:46.802751  215874 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:38:46.802830  215874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:38:46.833693  215874 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1020 12:38:46.833739  215874 start.go:495] detecting cgroup driver to use...
	I1020 12:38:46.833755  215874 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1020 12:38:46.833825  215874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:38:46.860656  215874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:38:46.874216  215874 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:38:46.874277  215874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:38:46.894288  215874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:38:46.917602  215874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:38:47.032940  215874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:38:47.150282  215874 docker.go:234] disabling docker service ...
	I1020 12:38:47.150356  215874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:38:47.169385  215874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:38:47.185367  215874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:38:47.298927  215874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:38:47.395851  215874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:38:47.409756  215874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:38:47.424202  215874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:38:47.424296  215874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.439824  215874 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:38:47.439882  215874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.505005  215874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.568947  215874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.600007  215874 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:38:47.609505  215874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.619610  215874 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.705434  215874 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.762941  215874 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:38:47.772019  215874 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:38:47.779566  215874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:47.859586  215874 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:38:49.093904  215874 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.234279705s)
	I1020 12:38:49.093940  215874 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:38:49.094069  215874 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:38:49.100668  215874 start.go:563] Will wait 60s for crictl version
	I1020 12:38:49.100723  215874 ssh_runner.go:195] Run: which crictl
	I1020 12:38:49.106157  215874 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:38:49.146029  215874 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:38:49.146246  215874 ssh_runner.go:195] Run: crio --version
	I1020 12:38:49.182539  215874 ssh_runner.go:195] Run: crio --version
	I1020 12:38:49.219194  215841 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:38:49.220802  215874 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:38:48.990436  210789 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-123936:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.489501875s)
	I1020 12:38:48.990480  210789 kic.go:203] duration metric: took 5.4899238s to extract preloaded images to volume ...
	W1020 12:38:48.990574  210789 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 12:38:48.990610  210789 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 12:38:48.990657  210789 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:38:49.066575  210789 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-123936 --name missing-upgrade-123936 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-123936 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-123936 --network missing-upgrade-123936 --ip 192.168.94.2 --volume missing-upgrade-123936:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
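The docker run above is the usual kicbase recipe: a privileged container running systemd as PID 1, tmpfs on /run and /tmp, the profile's named volume on /var, and 127.0.0.1-only publishes of the SSH (22), API server (8443), Docker (2376) and registry ports onto ephemeral host ports. The host ports Docker actually picked can be listed afterwards:

	$ docker port missing-upgrade-123936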
	I1020 12:38:49.408836  210789 cli_runner.go:164] Run: docker container inspect missing-upgrade-123936 --format={{.State.Running}}
	I1020 12:38:49.430320  210789 cli_runner.go:164] Run: docker container inspect missing-upgrade-123936 --format={{.State.Status}}
	I1020 12:38:49.452884  210789 cli_runner.go:164] Run: docker exec missing-upgrade-123936 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:38:49.505343  210789 oci.go:144] the created container "missing-upgrade-123936" has a running status.
	I1020 12:38:49.505379  210789 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/missing-upgrade-123936/id_rsa...
	I1020 12:38:49.667457  210789 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/missing-upgrade-123936/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:38:49.701914  210789 cli_runner.go:164] Run: docker container inspect missing-upgrade-123936 --format={{.State.Status}}
	I1020 12:38:49.729955  210789 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:38:49.729980  210789 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-123936 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:38:49.800378  210789 cli_runner.go:164] Run: docker container inspect missing-upgrade-123936 --format={{.State.Status}}
	I1020 12:38:49.825389  210789 machine.go:93] provisionDockerMachine start ...
	I1020 12:38:49.825513  210789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123936
	I1020 12:38:49.846904  210789 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:49.847225  210789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1020 12:38:49.847247  210789 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:38:49.982915  210789 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-123936
	
	I1020 12:38:49.982939  210789 ubuntu.go:182] provisioning hostname "missing-upgrade-123936"
	I1020 12:38:49.983010  210789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123936
	I1020 12:38:50.004554  210789 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:50.004857  210789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1020 12:38:50.004876  210789 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-123936 && echo "missing-upgrade-123936" | sudo tee /etc/hostname
	I1020 12:38:50.144001  210789 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-123936
	
	I1020 12:38:50.144087  210789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123936
	I1020 12:38:50.164505  210789 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:50.164789  210789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1020 12:38:50.164830  210789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-123936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-123936/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-123936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:38:50.284484  210789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:38:50.284534  210789 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:38:50.284578  210789 ubuntu.go:190] setting up certificates
	I1020 12:38:50.284591  210789 provision.go:84] configureAuth start
	I1020 12:38:50.284652  210789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-123936
	I1020 12:38:50.303830  210789 provision.go:143] copyHostCerts
	I1020 12:38:50.303900  210789 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:38:50.303915  210789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:38:50.303993  210789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:38:50.304107  210789 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:38:50.304118  210789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:38:50.304161  210789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:38:50.304251  210789 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:38:50.304261  210789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:38:50.304299  210789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:38:50.304375  210789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-123936 san=[127.0.0.1 192.168.94.2 localhost minikube missing-upgrade-123936]
	I1020 12:38:50.611461  210789 provision.go:177] copyRemoteCerts
	I1020 12:38:50.611597  210789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:38:50.611645  210789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123936
	I1020 12:38:50.633842  210789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/missing-upgrade-123936/id_rsa Username:docker}
	I1020 12:38:50.723870  210789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
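The provision step above ("generating server cert ... san=[...]") issues a TLS server certificate whose subject alternative names cover every address and name the machine answers on (127.0.0.1, 192.168.94.2, localhost, minikube, missing-upgrade-123936). A minimal Go sketch of that idea, self-signed for brevity; the real flow signs with the ca.pem/ca-key.pem pair listed in the log, which this sketch does not reproduce:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-123936"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the "san=[...]" log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:    []string{"localhost", "minikube", "missing-upgrade-123936"},
	}
	// Self-signed: template doubles as parent. minikube instead passes its CA here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}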
	I1020 12:38:49.221763  215841 cli_runner.go:164] Run: docker network inspect cert-expiration-365628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:49.246591  215841 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1020 12:38:49.251314  215841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:38:49.265641  215841 kubeadm.go:883] updating cluster {Name:cert-expiration-365628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-365628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:38:49.265745  215841 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:38:49.265810  215841 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:38:49.306574  215841 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:38:49.306585  215841 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:38:49.306628  215841 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:38:49.338798  215841 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:38:49.338811  215841 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:38:49.338818  215841 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1020 12:38:49.338901  215841 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-365628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-365628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 12:38:49.338969  215841 ssh_runner.go:195] Run: crio config
	I1020 12:38:49.411495  215841 cni.go:84] Creating CNI manager for ""
	I1020 12:38:49.411509  215841 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:38:49.411529  215841 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:38:49.411555  215841 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-365628 NodeName:cert-expiration-365628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:38:49.411685  215841 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-365628"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 12:38:49.411736  215841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:38:49.423020  215841 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:38:49.423102  215841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:38:49.434854  215841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1020 12:38:49.451433  215841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:38:49.470977  215841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1020 12:38:49.487931  215841 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:38:49.492659  215841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
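Both /etc/hosts updates above (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same idempotent pattern: filter out any existing line ending in the name, append a fresh "IP<tab>name" entry, and copy the temp file back with sudo. A rough in-memory Go equivalent of that filter-and-append (a sketch of the pattern, not minikube's implementation):

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing line for name, then appends a fresh entry,
// mirroring the { grep -v ...; echo ...; } > /tmp/h.$$ shell pipeline above.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(before, "192.168.76.2", "control-plane.minikube.internal"))
}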
	I1020 12:38:49.506444  215841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:49.620183  215841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:38:49.638271  215841 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628 for IP: 192.168.76.2
	I1020 12:38:49.638281  215841 certs.go:195] generating shared ca certs ...
	I1020 12:38:49.638298  215841 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:49.638425  215841 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:38:49.638549  215841 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:38:49.638557  215841 certs.go:257] generating profile certs ...
	I1020 12:38:49.638613  215841 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/client.key
	I1020 12:38:49.638632  215841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/client.crt with IP's: []
	I1020 12:38:50.037186  215841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/client.crt ...
	I1020 12:38:50.037200  215841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/client.crt: {Name:mk88fa910f6396b666b21ba54195fc932bfe6023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.037354  215841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/client.key ...
	I1020 12:38:50.037361  215841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/client.key: {Name:mk809d75f9e9126fb1947c3690d9a240b4eae2a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.037441  215841 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.key.2340d238
	I1020 12:38:50.037451  215841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.crt.2340d238 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1020 12:38:50.323206  215841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.crt.2340d238 ...
	I1020 12:38:50.323221  215841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.crt.2340d238: {Name:mkec34f54537cb16624e5b7414e45bec2703ea6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.323393  215841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.key.2340d238 ...
	I1020 12:38:50.323415  215841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.key.2340d238: {Name:mk3d60da0cb52b99fb41481c738eccadba6f746b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.323490  215841 certs.go:382] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.crt.2340d238 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.crt
	I1020 12:38:50.323575  215841 certs.go:386] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.key.2340d238 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.key
	I1020 12:38:50.323629  215841 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.key
	I1020 12:38:50.323639  215841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.crt with IP's: []
	I1020 12:38:50.419917  215841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.crt ...
	I1020 12:38:50.419934  215841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.crt: {Name:mka483e947e1f4ab237e0ac8828cedb4fba55513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.420100  215841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.key ...
	I1020 12:38:50.420107  215841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.key: {Name:mk72e3509b8c9d8468f70f839997baed0b9f638c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.420276  215841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:38:50.420305  215841 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:38:50.420311  215841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:38:50.420332  215841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:38:50.420356  215841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:38:50.420380  215841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:38:50.420433  215841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:38:50.421034  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:38:50.439402  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:38:50.456850  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:38:50.474471  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:38:50.492236  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1020 12:38:50.510755  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1020 12:38:50.528158  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:38:50.545252  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:38:50.563038  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:38:50.585738  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:38:50.610060  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:38:50.631474  215841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:38:50.645232  215841 ssh_runner.go:195] Run: openssl version
	I1020 12:38:50.651614  215841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:38:50.660810  215841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:38:50.664999  215841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:38:50.665046  215841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:38:50.700132  215841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:38:50.710233  215841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:38:50.721395  215841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:38:50.726887  215841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:38:50.726937  215841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:38:50.770009  215841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:38:50.779185  215841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:38:50.789209  215841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:50.793273  215841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:50.793319  215841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:50.838450  215841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
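The openssl x509 -hash calls above compute the subject-hash filenames (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL-style directory lookups expect as symlinks under /etc/ssl/certs. A quick Go sanity check that one of those PEMs parses and can seed a trust pool, using the path from the log; purely illustrative, and it simply reports an error if the file is absent:

package main

import (
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	// Path taken from the log; adjust for a different machine.
	pemBytes, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pemBytes) {
		fmt.Fprintln(os.Stderr, "no certificates parsed from PEM")
		return
	}
	fmt.Println("CA loaded into pool")
}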
	I1020 12:38:50.848371  215841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:38:50.852838  215841 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 12:38:50.852893  215841 kubeadm.go:400] StartCluster: {Name:cert-expiration-365628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-365628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:38:50.852972  215841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:38:50.853025  215841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:38:50.889271  215841 cri.go:89] found id: ""
	I1020 12:38:50.889328  215841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:38:50.901390  215841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:38:50.911601  215841 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 12:38:50.911640  215841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:38:50.920934  215841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 12:38:50.920944  215841 kubeadm.go:157] found existing configuration files:
	
	I1020 12:38:50.920990  215841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 12:38:50.929123  215841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 12:38:50.929170  215841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 12:38:50.937478  215841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 12:38:50.945613  215841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 12:38:50.945661  215841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:38:50.953547  215841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 12:38:50.962473  215841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 12:38:50.962526  215841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:38:50.971124  215841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 12:38:50.979663  215841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 12:38:50.979715  215841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 12:38:50.987591  215841 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 12:38:51.037328  215841 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 12:38:51.037391  215841 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 12:38:51.069722  215841 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 12:38:51.069812  215841 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1020 12:38:51.069843  215841 kubeadm.go:318] OS: Linux
	I1020 12:38:51.069889  215841 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 12:38:51.069935  215841 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 12:38:51.070002  215841 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 12:38:51.070063  215841 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 12:38:51.070103  215841 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 12:38:51.070167  215841 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 12:38:51.070238  215841 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 12:38:51.070315  215841 kubeadm.go:318] CGROUPS_IO: enabled
	I1020 12:38:51.166444  215841 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 12:38:51.166560  215841 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 12:38:51.166989  215841 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 12:38:51.177906  215841 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
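The kubeadm init started at 12:38:50.987591 runs under sudo /bin/bash -c with PATH prefixed so the pinned v1.34.1 binaries win over anything on the node, and its preflight findings are streamed back line by line as the kubeadm.go:318 entries above. A sketch of how such an invocation can be assembled with os/exec (the command is echoed rather than executed, and the long --ignore-preflight-errors list is abbreviated; see the log line for the full flags):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Inner command mirrors the log's env-PATH prefix trick; flags truncated.
	inner := `env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml`
	cmd := exec.Command("sudo", "/bin/bash", "-c", inner)
	fmt.Println(cmd.String()) // print instead of cmd.Run() for illustration
}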
	
	
	==> CRI-O <==
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.385273711Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.3861969Z" level=info msg="Conmon does support the --sync option"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.386222996Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.386244082Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.387029823Z" level=info msg="Conmon does support the --sync option"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.387052157Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.391544344Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.391572397Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.392137484Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.392606422Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.392667654Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.399015633Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.447214019Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-wnfvn Namespace:kube-system ID:3fc81582749368af7068e357f0baf0831f08bc049fdbeb81a77e6e49757ebd1f UID:456a6380-cb4a-4846-be4c-30bba34b7db3 NetNS:/var/run/netns/68bba9c1-7708-480c-9316-a7b7cc090194 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a080}] Aliases:map[]}"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.447526322Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-wnfvn for CNI network kindnet (type=ptp)"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448083607Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448115021Z" level=info msg="Starting seccomp notifier watcher"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448175726Z" level=info msg="Create NRI interface"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448291543Z" level=info msg="built-in NRI default validator is disabled"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448302133Z" level=info msg="runtime interface created"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448314981Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448322967Z" level=info msg="runtime interface starting up..."
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448330642Z" level=info msg="starting plugins..."
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448359954Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448766569Z" level=info msg="No systemd watchdog enabled"
	Oct 20 12:38:45 pause-918853 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
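CRI-O's startup log above reports finding the kindnet CNI network (type=ptp) at /etc/cni/net.d/10-kindnet.conflist and making it the default network. For reference, a conflist in that general shape can be emitted as below; the concrete plugin fields are assumptions for illustration, not the contents of the file on this node:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative CNI conflist matching the name/type CRI-O logged above.
	conf := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "kindnet",
		"plugins": []map[string]any{
			{"type": "ptp", "ipMasq": false,
				"ipam": map[string]any{"type": "host-local",
					"ranges": [][]map[string]string{{{"subnet": "10.244.0.0/24"}}}}},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}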
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	72a3b202a7641       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 seconds ago      Running             coredns                   0                   3fc8158274936       coredns-66bc5c9577-wnfvn               kube-system
	c9e90a7b75b16       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   28 seconds ago      Running             kube-proxy                0                   a8d026ff182de       kube-proxy-9md6s                       kube-system
	8845cf52f71fb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   28 seconds ago      Running             kindnet-cni               0                   f57d3471af5f7       kindnet-pvqlr                          kube-system
	ee48e32b2f57c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   39 seconds ago      Running             etcd                      0                   6d2c55e6a2370       etcd-pause-918853                      kube-system
	84c4c4d5781d5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   39 seconds ago      Running             kube-apiserver            0                   09d05aae12bf9       kube-apiserver-pause-918853            kube-system
	e0e2d9777d82f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   39 seconds ago      Running             kube-scheduler            0                   9c59c659ea945       kube-scheduler-pause-918853            kube-system
	9f83eedefaca2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   39 seconds ago      Running             kube-controller-manager   0                   9bf6d6bab963b       kube-controller-manager-pause-918853   kube-system
	
	
	==> coredns [72a3b202a76412f26700ad62c38784891b6c00402b287588a8795c5e217ecc86] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54848 - 49809 "HINFO IN 8001088817092132921.4386291470931721400. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020392259s
	
	
	==> describe nodes <==
	Name:               pause-918853
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-918853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=pause-918853
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_38_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:38:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-918853
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:38:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:38:47 +0000   Mon, 20 Oct 2025 12:38:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:38:47 +0000   Mon, 20 Oct 2025 12:38:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:38:47 +0000   Mon, 20 Oct 2025 12:38:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:38:47 +0000   Mon, 20 Oct 2025 12:38:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-918853
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                4fc86615-9ae4-4756-b290-33e6674fa76f
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-wnfvn                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-pause-918853                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-pvqlr                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-pause-918853             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-918853    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-9md6s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-pause-918853             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 35s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node pause-918853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node pause-918853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node pause-918853 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node pause-918853 event: Registered Node pause-918853 in Controller
	  Normal  NodeReady                18s   kubelet          Node pause-918853 status is now: NodeReady
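As a cross-check of the "Allocated resources" block, summing the per-pod CPU requests from the Non-terminated Pods table reproduces the reported 850m, and integer division against the 8-CPU (8000m) allocatable capacity gives the displayed 10%:

package main

import "fmt"

func main() {
	// CPU requests (millicores) copied from the Non-terminated Pods table.
	requestsMilli := []int{
		100, // coredns-66bc5c9577-wnfvn
		100, // etcd-pause-918853
		100, // kindnet-pvqlr
		250, // kube-apiserver-pause-918853
		200, // kube-controller-manager-pause-918853
		0,   // kube-proxy-9md6s
		100, // kube-scheduler-pause-918853
	}
	total := 0
	for _, m := range requestsMilli {
		total += m
	}
	// Allocatable cpu is 8 cores = 8000m; describe floors the percentage.
	fmt.Printf("cpu %dm (%d%%)\n", total, total*100/8000) // cpu 850m (10%)
}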
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [ee48e32b2f57cf831f9662b4a8970dd4580fe5fff3bbd3ab9b8a106a97178013] <==
	{"level":"warn","ts":"2025-10-20T12:38:13.908725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.917899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.931870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.938922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.945954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.952811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.961855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.969528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.975866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.981962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.988252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.002165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.010487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.021511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.028520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.035182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.048229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.054701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.069622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.078577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.086797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.138229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:41.445746Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.344722ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596502342613536 > lease_revoke:<id:06ed9a01a09639a9>","response":"size:28"}
	{"level":"warn","ts":"2025-10-20T12:38:48.017937Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.715567ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-9md6s\" limit:1 ","response":"range_response_count:1 size:5033"}
	{"level":"info","ts":"2025-10-20T12:38:48.018005Z","caller":"traceutil/trace.go:172","msg":"trace[61134928] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-9md6s; range_end:; response_count:1; response_revision:406; }","duration":"183.827672ms","start":"2025-10-20T12:38:47.834163Z","end":"2025-10-20T12:38:48.017991Z","steps":["trace[61134928] 'range keys from in-memory index tree'  (duration: 183.541493ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:38:52 up  1:21,  0 user,  load average: 5.58, 2.99, 1.69
	Linux pause-918853 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8845cf52f71fb552c506de59a81f18ebd549bf1903b7034a503f9a73ce2b6fd1] <==
	I1020 12:38:23.374021       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:38:23.374354       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 12:38:23.374513       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:38:23.374532       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:38:23.374562       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:38:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:38:23.579699       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:38:23.579736       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:38:23.579749       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:38:23.775517       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:38:23.882509       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:38:23.882537       1 metrics.go:72] Registering metrics
	I1020 12:38:23.882584       1 controller.go:711] "Syncing nftables rules"
	I1020 12:38:33.583859       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:38:33.583939       1 main.go:301] handling current node
	I1020 12:38:43.585870       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:38:43.585913       1 main.go:301] handling current node
	
	
	==> kube-apiserver [84c4c4d5781d5e4a18aa2f86b8f181bb6608c642b20ac03d501b5e5dcf22e42b] <==
	I1020 12:38:14.710169       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1020 12:38:14.710828       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1020 12:38:14.713203       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:38:14.713482       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:38:14.713645       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1020 12:38:14.718257       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:38:14.718479       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:38:14.883843       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:38:15.587707       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1020 12:38:15.591972       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1020 12:38:15.592003       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:38:16.125581       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:38:16.166162       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:38:16.292061       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1020 12:38:16.299809       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1020 12:38:16.301234       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:38:16.306895       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:38:16.620359       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:38:17.072165       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:38:17.082559       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1020 12:38:17.090865       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 12:38:22.472852       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 12:38:22.584524       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:38:22.589614       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:38:22.688192       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9f83eedefaca2713366a42166d43e08671f32da7f80f270c7d3e27b91389998c] <==
	I1020 12:38:21.619639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:38:21.619657       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:38:21.619667       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:38:21.619848       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 12:38:21.621486       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 12:38:21.621520       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1020 12:38:21.621529       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 12:38:21.621552       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1020 12:38:21.621568       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 12:38:21.621595       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 12:38:21.621613       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 12:38:21.621625       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 12:38:21.621598       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 12:38:21.621628       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:38:21.621614       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 12:38:21.622071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 12:38:21.622959       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 12:38:21.623086       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1020 12:38:21.624269       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 12:38:21.628543       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 12:38:21.629744       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:38:21.630951       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1020 12:38:21.637154       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 12:38:21.644617       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:38:36.572052       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c9e90a7b75b16fc8e8ef756cc88964cd974e6f53a9390b604d0d29be3e4e48e8] <==
	I1020 12:38:23.122231       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:38:23.178280       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:38:23.279132       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:38:23.279176       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 12:38:23.279304       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:38:23.299351       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:38:23.300409       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:38:23.306218       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:38:23.306613       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:38:23.306630       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:38:23.307914       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:38:23.307941       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:38:23.307951       1 config.go:200] "Starting service config controller"
	I1020 12:38:23.307970       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:38:23.307994       1 config.go:309] "Starting node config controller"
	I1020 12:38:23.308003       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:38:23.308180       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:38:23.308194       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:38:23.408388       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:38:23.408395       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:38:23.408406       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 12:38:23.408504       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
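	The E-level line above is a configuration hint rather than a failure: with nodePortAddresses unset, kube-proxy accepts NodePort traffic on every local IP instead of only the primary node address. An illustrative Go sketch (not kube-proxy code) that enumerates what "all local IPs" resolves to on a node:
	
	// localaddrs.go: lists the local addresses on which NodePort
	// connections would be accepted while nodePortAddresses is unset.
	package main
	
	import (
		"fmt"
		"log"
		"net"
	)
	
	func main() {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			log.Fatal(err)
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok {
				fmt.Println(ipnet.IP) // includes loopback, the node IP, etc.
			}
		}
	}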
	
	
	==> kube-scheduler [e0e2d9777d82f4ff2db4444ef7768324a1d003e72c9d5d301c966ab348bbfb96] <==
	E1020 12:38:14.645248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:38:14.645271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:38:14.645304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:38:14.645334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:38:14.645337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:38:14.645404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:38:14.645439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:38:14.645517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:38:14.645583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:38:14.645714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:38:14.645737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:38:15.490498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:38:15.500040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:38:15.500959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:38:15.554107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:38:15.634302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:38:15.663351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:38:15.663379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:38:15.704016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 12:38:15.902726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:38:15.938294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:38:15.938399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 12:38:15.948278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:38:16.086939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1020 12:38:18.041521       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
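	The "Failed to watch ... is forbidden" errors above are startup ordering, not a persistent RBAC problem: the scheduler's informers begin listing before the apiserver finishes bootstrapping its RBAC roles, the reflectors retry, and the final "Caches are synced" line shows they recovered within seconds. A hedged client-go sketch of the same retry-on-Forbidden pattern (the KUBECONFIG path and the Nodes list are illustrative choices, not what the scheduler itself does):
	
	// retryforbidden.go: a sketch of tolerating transient Forbidden
	// responses while RBAC bootstrap completes.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"os"
		"time"
	
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		for {
			_, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
			if apierrors.IsForbidden(err) {
				fmt.Println("RBAC not bootstrapped yet, retrying:", err)
				time.Sleep(time.Second)
				continue
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println("list succeeded")
			return
		}
	}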
	
	
	==> kubelet <==
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.908715    1358 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.908816    1358 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.908854    1358 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.908866    1358 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.979194    1358 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.979258    1358 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.979276    1358 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:43 pause-918853 kubelet[1358]: W1020 12:38:43.195493    1358 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 20 12:38:43 pause-918853 kubelet[1358]: E1020 12:38:43.980292    1358 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 20 12:38:43 pause-918853 kubelet[1358]: E1020 12:38:43.980359    1358 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:43 pause-918853 kubelet[1358]: E1020 12:38:43.980377    1358 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.908510    1358 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.908591    1358 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.908614    1358 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.908632    1358 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.980599    1358 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.980675    1358 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.980696    1358 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:45 pause-918853 kubelet[1358]: E1020 12:38:45.981630    1358 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 20 12:38:45 pause-918853 kubelet[1358]: E1020 12:38:45.981690    1358 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:45 pause-918853 kubelet[1358]: E1020 12:38:45.981710    1358 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:49 pause-918853 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 12:38:49 pause-918853 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 12:38:49 pause-918853 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 20 12:38:49 pause-918853 systemd[1]: kubelet.service: Consumed 1.383s CPU time.
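	Every kubelet error above shares one root cause: the pause operation stopped CRI-O, so /var/run/crio/crio.sock no longer exists and each CRI call fails with "connect: no such file or directory" until systemd stops the kubelet as well. A minimal Go sketch (not minikube's code) that separates "socket file missing" from "socket present but refusing connections":
	
	// crisock.go: distinguishes a missing CRI socket from one that
	// exists but is not accepting connections.
	package main
	
	import (
		"errors"
		"fmt"
		"io/fs"
		"net"
		"os"
		"time"
	)
	
	func main() {
		const sock = "/var/run/crio/crio.sock" // path taken from the kubelet errors above
	
		if _, err := os.Stat(sock); errors.Is(err, fs.ErrNotExist) {
			fmt.Println("socket file missing: the runtime is stopped (matches the logs above)")
			return
		}
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Println("socket exists but is not accepting connections:", err)
			return
		}
		conn.Close()
		fmt.Println("runtime socket is accepting connections")
	}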
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-918853 -n pause-918853
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-918853 -n pause-918853: exit status 2 (357.634322ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
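The "(may be ok)" note reflects that minikube status reports component state through its exit code, so a nonzero code such as 2 can mean a component is stopped or paused rather than that the command itself failed. A small Go sketch of how a caller can surface that code (the command and profile name are taken from this run):

// statuscode.go: runs minikube status and reports the exit code,
// the same signal the test harness inspects above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "status", "-p", "pause-918853")
	out, err := cmd.Output()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit code:", ee.ExitCode()) // 2 here meant "degraded, may be ok"
	}
}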
helpers_test.go:269: (dbg) Run:  kubectl --context pause-918853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-918853
helpers_test.go:243: (dbg) docker inspect pause-918853:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f",
	        "Created": "2025-10-20T12:38:00.784742664Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203597,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:38:01.274033494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f/hostname",
	        "HostsPath": "/var/lib/docker/containers/045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f/hosts",
	        "LogPath": "/var/lib/docker/containers/045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f/045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f-json.log",
	        "Name": "/pause-918853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-918853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-918853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "045b9ae9e1735faa757c74262088608edce017c749cb5ca27a3b60c236f63e7f",
	                "LowerDir": "/var/lib/docker/overlay2/8677a9bae922cafaac6e4f5ad8aa0494f4d88bee5bf9bd58385fc2b7ed5faeef-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8677a9bae922cafaac6e4f5ad8aa0494f4d88bee5bf9bd58385fc2b7ed5faeef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8677a9bae922cafaac6e4f5ad8aa0494f4d88bee5bf9bd58385fc2b7ed5faeef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8677a9bae922cafaac6e4f5ad8aa0494f4d88bee5bf9bd58385fc2b7ed5faeef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-918853",
	                "Source": "/var/lib/docker/volumes/pause-918853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-918853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-918853",
	                "name.minikube.sigs.k8s.io": "pause-918853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "21c79fe1b9eab44c04189d745b529f4063130db476e56f2a6f80f010d9ce34dc",
	            "SandboxKey": "/var/run/docker/netns/21c79fe1b9ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33008"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33009"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33012"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33010"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33011"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-918853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:81:4a:34:8f:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1da2f5d7872345588dd336e9fa2645feab1c8f2b3c0bf2980c7ba8e6bcbd92e5",
	                    "EndpointID": "cb7db1c9949cd8e19e204d738707345117b9ce2d5c91e1f55a97e950dfcd4cd8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-918853",
	                        "045b9ae9e173"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
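In the inspect output above, HostConfig.PortBindings only requests ephemeral host ports (every "HostPort" is empty), and the actual assignments appear under NetworkSettings.Ports, e.g. 8443/tcp bound to 127.0.0.1:33011. A hedged Go sketch that recovers the resolved mapping by parsing docker inspect output (the container name is taken from this report; the docker CLI must be on PATH):

// ports.go: prints the resolved host-port mapping for a container,
// mirroring the NetworkSettings.Ports block above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-918853").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	for port, bindings := range containers[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}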
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-918853 -n pause-918853
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-918853 -n pause-918853: exit status 2 (330.928051ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-918853 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-918853 logs -n 25: (1.051954546s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-312375 sudo journalctl -xeu kubelet --all --full --no-pager                                                       │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /etc/kubernetes/kubelet.conf                                                                      │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl cat docker --no-pager                                                                       │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /etc/docker/daemon.json                                                                           │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo docker system info                                                                                    │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cri-dockerd --version                                                                                 │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo cat /etc/containerd/config.toml                                                                       │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo containerd config dump                                                                                │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl cat crio --no-pager                                                                         │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo crio config                                                                                           │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ delete  │ -p cilium-312375                                                                                                            │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ start   │ -p cert-expiration-365628 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-365628    │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ start   │ -p force-systemd-flag-670413 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-670413 │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ start   │ -p pause-918853 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ pause   │ -p pause-918853 --alsologtostderr -v=5                                                                                      │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:38:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:38:37.168921  216515 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:38:37.169239  216515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:38:37.169251  216515 out.go:374] Setting ErrFile to fd 2...
	I1020 12:38:37.169257  216515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:38:37.169499  216515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:38:37.169959  216515 out.go:368] Setting JSON to false
	I1020 12:38:37.171048  216515 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4866,"bootTime":1760959051,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:38:37.171151  216515 start.go:141] virtualization: kvm guest
	I1020 12:38:37.173632  216515 out.go:179] * [pause-918853] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:38:37.176541  216515 notify.go:220] Checking for updates...
	I1020 12:38:37.176570  216515 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:38:37.177927  216515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:38:37.179545  216515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:38:37.180830  216515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:38:37.182292  216515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:38:37.183884  216515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:38:37.185812  216515 config.go:182] Loaded profile config "pause-918853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:37.186365  216515 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:38:37.215979  216515 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:38:37.216183  216515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:38:37.289280  216515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:80 SystemTime:2025-10-20 12:38:37.278081745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:38:37.289453  216515 docker.go:318] overlay module found
	I1020 12:38:37.291743  216515 out.go:179] * Using the docker driver based on existing profile
	I1020 12:38:37.293813  216515 start.go:305] selected driver: docker
	I1020 12:38:37.293829  216515 start.go:925] validating driver "docker" against &{Name:pause-918853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-918853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:38:37.293933  216515 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:38:37.294011  216515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:38:37.368892  216515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:95 SystemTime:2025-10-20 12:38:37.358199029 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:38:37.369595  216515 cni.go:84] Creating CNI manager for ""
	I1020 12:38:37.369656  216515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:38:37.369703  216515 start.go:349] cluster config:
	{Name:pause-918853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-918853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
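
Note: the cluster config dumped above is the in-memory profile that minikube later persists as JSON (see the "Saving config to .../profiles/pause-918853/config.json" line below). A minimal sketch of that persist step follows; the struct here is hypothetical and carries only a few of the fields visible in the dump, not minikube's real ClusterConfig type.

// Sketch: persisting a profile config as JSON, in the spirit of the
// "Saving config to .../profiles/pause-918853/config.json" step below.
// The struct fields are illustrative only (see the full dump above).
package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

type clusterConfig struct {
	Name              string `json:"Name"`
	KubernetesVersion string `json:"KubernetesVersion"`
	ContainerRuntime  string `json:"ContainerRuntime"`
	APIServerPort     int    `json:"APIServerPort"`
}

func saveProfile(base string, cfg clusterConfig) error {
	dir := filepath.Join(base, "profiles", cfg.Name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(cfg, "", "    ")
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
}

func main() {
	cfg := clusterConfig{Name: "pause-918853", KubernetesVersion: "v1.34.1", ContainerRuntime: "crio", APIServerPort: 8443}
	if err := saveProfile(".minikube", cfg); err != nil {
		log.Fatal(err)
	}
}
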
	I1020 12:38:37.375912  216515 out.go:179] * Starting "pause-918853" primary control-plane node in "pause-918853" cluster
	I1020 12:38:37.377444  216515 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:38:37.378825  216515 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:38:37.379998  216515 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:38:37.380062  216515 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:38:37.380076  216515 cache.go:58] Caching tarball of preloaded images
	I1020 12:38:37.380060  216515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:38:37.380209  216515 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:38:37.380226  216515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:38:37.380383  216515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/config.json ...
	I1020 12:38:37.407571  216515 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:38:37.407594  216515 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:38:37.407607  216515 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:38:37.407640  216515 start.go:360] acquireMachinesLock for pause-918853: {Name:mk965bd38db53d4ac880a0c625135874cb167a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:38:37.407730  216515 start.go:364] duration metric: took 41.997µs to acquireMachinesLock for "pause-918853"
	I1020 12:38:37.407748  216515 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:38:37.407756  216515 fix.go:54] fixHost starting: 
	I1020 12:38:37.408103  216515 cli_runner.go:164] Run: docker container inspect pause-918853 --format={{.State.Status}}
	I1020 12:38:37.430522  216515 fix.go:112] recreateIfNeeded on pause-918853: state=Running err=<nil>
	W1020 12:38:37.430560  216515 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 12:38:37.979997  210789 cli_runner.go:164] Run: docker container inspect missing-upgrade-123936 --format={{.State.Status}}
	W1020 12:38:38.002234  210789 cli_runner.go:211] docker container inspect missing-upgrade-123936 --format={{.State.Status}} returned with exit code 1
	I1020 12:38:38.002314  210789 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-123936": docker container inspect missing-upgrade-123936 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123936
	I1020 12:38:38.002326  210789 oci.go:673] temporary error: container missing-upgrade-123936 status is  but expect it to be exited
	I1020 12:38:38.002368  210789 oci.go:88] couldn't shut down missing-upgrade-123936 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-123936": docker container inspect missing-upgrade-123936 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123936
	 
	I1020 12:38:38.002417  210789 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-123936
	I1020 12:38:38.021927  210789 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-123936
	W1020 12:38:38.043882  210789 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-123936 returned with exit code 1
	I1020 12:38:38.043967  210789 cli_runner.go:164] Run: docker network inspect missing-upgrade-123936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:38.065731  210789 cli_runner.go:164] Run: docker network rm missing-upgrade-123936
	I1020 12:38:38.277539  210789 fix.go:124] Sleeping 1 second for extra luck!
	I1020 12:38:39.277682  210789 start.go:125] createHost starting for "" (driver="docker")
	I1020 12:38:39.497738  210789 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 12:38:39.497956  210789 start.go:159] libmachine.API.Create for "missing-upgrade-123936" (driver="docker")
	I1020 12:38:39.497999  210789 client.go:168] LocalClient.Create starting
	I1020 12:38:39.498104  210789 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 12:38:39.498158  210789 main.go:141] libmachine: Decoding PEM data...
	I1020 12:38:39.498179  210789 main.go:141] libmachine: Parsing certificate...
	I1020 12:38:39.498298  210789 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 12:38:39.498327  210789 main.go:141] libmachine: Decoding PEM data...
	I1020 12:38:39.498340  210789 main.go:141] libmachine: Parsing certificate...
	I1020 12:38:39.498653  210789 cli_runner.go:164] Run: docker network inspect missing-upgrade-123936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:38:39.522181  210789 cli_runner.go:211] docker network inspect missing-upgrade-123936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:38:39.522277  210789 network_create.go:284] running [docker network inspect missing-upgrade-123936] to gather additional debugging logs...
	I1020 12:38:39.522315  210789 cli_runner.go:164] Run: docker network inspect missing-upgrade-123936
	W1020 12:38:39.545186  210789 cli_runner.go:211] docker network inspect missing-upgrade-123936 returned with exit code 1
	I1020 12:38:39.545228  210789 network_create.go:287] error running [docker network inspect missing-upgrade-123936]: docker network inspect missing-upgrade-123936: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-123936 not found
	I1020 12:38:39.545247  210789 network_create.go:289] output of [docker network inspect missing-upgrade-123936]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-123936 not found
	
	** /stderr **
	I1020 12:38:39.545413  210789 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:39.567759  210789 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
	I1020 12:38:39.568734  210789 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b260bb82684 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:b2:17:6a:60:46} reservation:<nil>}
	I1020 12:38:39.569609  210789 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-db577f5b7d12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:85:a2:b7:03:1d} reservation:<nil>}
	I1020 12:38:39.570131  210789 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1f871d5cfd48 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a2:c6:86:42:b6:13} reservation:<nil>}
	I1020 12:38:39.571135  210789 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1da2f5d78723 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:8a:9b:da:cb:cc:03} reservation:<nil>}
	I1020 12:38:39.572301  210789 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020842a0}
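
Note: the "skipping subnet ... that is taken" lines above show the free-subnet scan: candidates start at 192.168.49.0/24 and, in this log, advance by 9 in the third octet until one has no matching host bridge. A simplified sketch under those assumptions; the taken-check here is just a set lookup, whereas minikube actually inspects host interfaces and docker networks.

// Sketch of the free private subnet scan suggested by the log above.
// Step size (9) and start (49) are inferred from this run.
package main

import "fmt"

func freeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			fmt.Printf("skipping subnet %s that is taken\n", cidr)
			continue
		}
		return cidr, true
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	if cidr, ok := freeSubnet(taken); ok {
		fmt.Println("using free private subnet", cidr) // 192.168.94.0/24 in this run
	}
}
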
	I1020 12:38:39.572332  210789 network_create.go:124] attempt to create docker network missing-upgrade-123936 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1020 12:38:39.572406  210789 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-123936 missing-upgrade-123936
	I1020 12:38:39.648738  210789 network_create.go:108] docker network missing-upgrade-123936 192.168.94.0/24 created
	I1020 12:38:39.648786  210789 kic.go:121] calculated static IP "192.168.94.2" for the "missing-upgrade-123936" container
	I1020 12:38:39.648888  210789 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:38:39.672646  210789 cli_runner.go:164] Run: docker volume create missing-upgrade-123936 --label name.minikube.sigs.k8s.io=missing-upgrade-123936 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:38:39.693109  210789 oci.go:103] Successfully created a docker volume missing-upgrade-123936
	I1020 12:38:39.693198  210789 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-123936-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-123936 --entrypoint /usr/bin/test -v missing-upgrade-123936:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
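
Note: the docker run line above is the "preload sidecar" trick: a throwaway container mounts the freshly created named volume at /var and runs `test -d /var/lib` as its entrypoint, so the volume exists with the expected layout before the real node container uses it. A sketch of the same invocation via os/exec; names are copied from the log, digest omitted for brevity.

// Sketch: verify/initialize a named volume with a throwaway container
// whose entrypoint is /usr/bin/test (i.e. `test -d /var/lib` inside).
package main

import (
	"log"
	"os/exec"
)

func main() {
	vol := "missing-upgrade-123936"
	img := "gcr.io/k8s-minikube/kicbase:v0.0.42" // digest omitted for brevity
	cmd := exec.Command("docker", "run", "--rm",
		"--name", vol+"-preload-sidecar",
		"--entrypoint", "/usr/bin/test",
		"-v", vol+":/var",
		img,
		"-d", "/var/lib", // argument to the test entrypoint
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("sidecar check failed: %v\n%s", err, out)
	}
}
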
	I1020 12:38:36.568266  215841 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 12:38:36.568505  215841 start.go:159] libmachine.API.Create for "cert-expiration-365628" (driver="docker")
	I1020 12:38:36.568540  215841 client.go:168] LocalClient.Create starting
	I1020 12:38:36.568623  215841 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 12:38:36.568665  215841 main.go:141] libmachine: Decoding PEM data...
	I1020 12:38:36.568683  215841 main.go:141] libmachine: Parsing certificate...
	I1020 12:38:36.568752  215841 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 12:38:36.568790  215841 main.go:141] libmachine: Decoding PEM data...
	I1020 12:38:36.568816  215841 main.go:141] libmachine: Parsing certificate...
	I1020 12:38:36.569185  215841 cli_runner.go:164] Run: docker network inspect cert-expiration-365628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:38:36.587910  215841 cli_runner.go:211] docker network inspect cert-expiration-365628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:38:36.587978  215841 network_create.go:284] running [docker network inspect cert-expiration-365628] to gather additional debugging logs...
	I1020 12:38:36.587991  215841 cli_runner.go:164] Run: docker network inspect cert-expiration-365628
	W1020 12:38:36.606011  215841 cli_runner.go:211] docker network inspect cert-expiration-365628 returned with exit code 1
	I1020 12:38:36.606034  215841 network_create.go:287] error running [docker network inspect cert-expiration-365628]: docker network inspect cert-expiration-365628: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-365628 not found
	I1020 12:38:36.606049  215841 network_create.go:289] output of [docker network inspect cert-expiration-365628]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-365628 not found
	
	** /stderr **
	I1020 12:38:36.606246  215841 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:36.626704  215841 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
	I1020 12:38:36.627187  215841 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b260bb82684 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:b2:17:6a:60:46} reservation:<nil>}
	I1020 12:38:36.627619  215841 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-db577f5b7d12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:85:a2:b7:03:1d} reservation:<nil>}
	I1020 12:38:36.628215  215841 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e1bc10}
	I1020 12:38:36.628238  215841 network_create.go:124] attempt to create docker network cert-expiration-365628 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1020 12:38:36.628297  215841 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-365628 cert-expiration-365628
	I1020 12:38:36.692174  215841 network_create.go:108] docker network cert-expiration-365628 192.168.76.0/24 created
	I1020 12:38:36.692200  215841 kic.go:121] calculated static IP "192.168.76.2" for the "cert-expiration-365628" container
	I1020 12:38:36.692292  215841 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:38:36.714335  215841 cli_runner.go:164] Run: docker volume create cert-expiration-365628 --label name.minikube.sigs.k8s.io=cert-expiration-365628 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:38:36.733618  215841 oci.go:103] Successfully created a docker volume cert-expiration-365628
	I1020 12:38:36.733677  215841 cli_runner.go:164] Run: docker run --rm --name cert-expiration-365628-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-365628 --entrypoint /usr/bin/test -v cert-expiration-365628:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 12:38:37.161122  215841 oci.go:107] Successfully prepared a docker volume cert-expiration-365628
	I1020 12:38:37.161260  215841 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:38:37.161286  215841 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:38:37.161372  215841 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-365628:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 12:38:36.619206  215874 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 12:38:36.619480  215874 start.go:159] libmachine.API.Create for "force-systemd-flag-670413" (driver="docker")
	I1020 12:38:36.619514  215874 client.go:168] LocalClient.Create starting
	I1020 12:38:36.619620  215874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 12:38:36.619654  215874 main.go:141] libmachine: Decoding PEM data...
	I1020 12:38:36.619671  215874 main.go:141] libmachine: Parsing certificate...
	I1020 12:38:36.619728  215874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 12:38:36.619747  215874 main.go:141] libmachine: Decoding PEM data...
	I1020 12:38:36.619758  215874 main.go:141] libmachine: Parsing certificate...
	I1020 12:38:36.620109  215874 cli_runner.go:164] Run: docker network inspect force-systemd-flag-670413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:38:36.639598  215874 cli_runner.go:211] docker network inspect force-systemd-flag-670413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:38:36.639686  215874 network_create.go:284] running [docker network inspect force-systemd-flag-670413] to gather additional debugging logs...
	I1020 12:38:36.639707  215874 cli_runner.go:164] Run: docker network inspect force-systemd-flag-670413
	W1020 12:38:36.660895  215874 cli_runner.go:211] docker network inspect force-systemd-flag-670413 returned with exit code 1
	I1020 12:38:36.660933  215874 network_create.go:287] error running [docker network inspect force-systemd-flag-670413]: docker network inspect force-systemd-flag-670413: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-670413 not found
	I1020 12:38:36.660956  215874 network_create.go:289] output of [docker network inspect force-systemd-flag-670413]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-670413 not found
	
	** /stderr **
	I1020 12:38:36.661047  215874 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:36.680671  215874 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
	I1020 12:38:36.681347  215874 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b260bb82684 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:b2:17:6a:60:46} reservation:<nil>}
	I1020 12:38:36.682015  215874 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-db577f5b7d12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:85:a2:b7:03:1d} reservation:<nil>}
	I1020 12:38:36.682432  215874 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1f871d5cfd48 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a2:c6:86:42:b6:13} reservation:<nil>}
	I1020 12:38:36.683120  215874 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1da2f5d78723 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:8a:9b:da:cb:cc:03} reservation:<nil>}
	I1020 12:38:36.683906  215874 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-b134d2f2e79a IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:12:0e:db:e2:b0:64} reservation:<nil>}
	I1020 12:38:36.684795  215874 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f3f240}
	I1020 12:38:36.684824  215874 network_create.go:124] attempt to create docker network force-systemd-flag-670413 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1020 12:38:36.684876  215874 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-670413 force-systemd-flag-670413
	I1020 12:38:36.749449  215874 network_create.go:108] docker network force-systemd-flag-670413 192.168.103.0/24 created
	I1020 12:38:36.749486  215874 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-flag-670413" container
	I1020 12:38:36.749588  215874 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:38:36.768577  215874 cli_runner.go:164] Run: docker volume create force-systemd-flag-670413 --label name.minikube.sigs.k8s.io=force-systemd-flag-670413 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:38:36.792139  215874 oci.go:103] Successfully created a docker volume force-systemd-flag-670413
	I1020 12:38:36.792236  215874 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-670413-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-670413 --entrypoint /usr/bin/test -v force-systemd-flag-670413:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 12:38:37.247276  215874 oci.go:107] Successfully prepared a docker volume force-systemd-flag-670413
	I1020 12:38:37.247331  215874 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:38:37.247357  215874 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:38:37.247426  215874 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-670413:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 12:38:37.436368  216515 out.go:252] * Updating the running docker "pause-918853" container ...
	I1020 12:38:37.436425  216515 machine.go:93] provisionDockerMachine start ...
	I1020 12:38:37.436505  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:37.460075  216515 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:37.460433  216515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1020 12:38:37.460457  216515 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:38:37.612098  216515 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-918853
	
	I1020 12:38:37.612132  216515 ubuntu.go:182] provisioning hostname "pause-918853"
	I1020 12:38:37.612192  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:37.635804  216515 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:37.636124  216515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1020 12:38:37.636152  216515 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-918853 && echo "pause-918853" | sudo tee /etc/hostname
	I1020 12:38:37.793335  216515 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-918853
	
	I1020 12:38:37.793412  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:37.814803  216515 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:37.815036  216515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1020 12:38:37.815065  216515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-918853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-918853/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-918853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:38:37.961529  216515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:38:37.961577  216515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:38:37.961620  216515 ubuntu.go:190] setting up certificates
	I1020 12:38:37.961641  216515 provision.go:84] configureAuth start
	I1020 12:38:37.961709  216515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-918853
	I1020 12:38:37.983035  216515 provision.go:143] copyHostCerts
	I1020 12:38:37.983115  216515 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:38:37.983139  216515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:38:37.983225  216515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:38:37.983382  216515 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:38:37.983399  216515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:38:37.983446  216515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:38:37.983555  216515 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:38:37.983576  216515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:38:37.983615  216515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:38:37.983712  216515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.pause-918853 san=[127.0.0.1 192.168.85.2 localhost minikube pause-918853]
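
Note: the "generating server cert ... san=[...]" line above creates a server certificate whose SANs cover 127.0.0.1, the node IP, localhost, minikube, and the profile name. A sketch of equivalent SAN handling with crypto/x509; it self-signs for brevity, whereas minikube signs with its CA key (ca.pem / ca-key.pem).

// Sketch: server cert with the SANs listed in the log above (self-signed
// here; the real cert is signed by minikube's CA).
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-918853"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "pause-918853"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
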
	I1020 12:38:38.306795  216515 provision.go:177] copyRemoteCerts
	I1020 12:38:38.306860  216515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:38:38.306915  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:38.328606  216515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/pause-918853/id_rsa Username:docker}
	I1020 12:38:38.437030  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:38:38.456033  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1020 12:38:38.475266  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:38:38.495103  216515 provision.go:87] duration metric: took 533.443588ms to configureAuth
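
Note: the "new ssh client: &{IP:127.0.0.1 Port:33008 ...}" entries above reflect how the node is reached: the container's port 22 is published on a random localhost port and minikube dials it as user "docker" with the per-machine id_rsa. A sketch of that path using golang.org/x/crypto/ssh; host-key checking is disabled here only to keep the sketch short.

// Sketch: SSH into a KIC node via its published localhost port.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(".minikube/machines/pause-918853/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33008", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
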
	I1020 12:38:38.495136  216515 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:38:38.495384  216515 config.go:182] Loaded profile config "pause-918853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:38.495504  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:38.518802  216515 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:38.519037  216515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1020 12:38:38.519055  216515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:38:43.394706  216515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:38:43.394734  216515 machine.go:96] duration metric: took 5.958299693s to provisionDockerMachine
	I1020 12:38:43.394751  216515 start.go:293] postStartSetup for "pause-918853" (driver="docker")
	I1020 12:38:43.394766  216515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:38:43.394857  216515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:38:43.394927  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:43.421893  216515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/pause-918853/id_rsa Username:docker}
	I1020 12:38:43.530054  216515 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:38:43.534285  216515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:38:43.534326  216515 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:38:43.534340  216515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:38:43.534401  216515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:38:43.534501  216515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:38:43.534633  216515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:38:43.544484  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:38:43.564395  216515 start.go:296] duration metric: took 169.626583ms for postStartSetup
	I1020 12:38:43.564496  216515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:38:43.564561  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:43.586123  216515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/pause-918853/id_rsa Username:docker}
	I1020 12:38:43.687030  216515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:38:43.693622  216515 fix.go:56] duration metric: took 6.285860684s for fixHost
	I1020 12:38:43.693654  216515 start.go:83] releasing machines lock for "pause-918853", held for 6.28591413s
	I1020 12:38:43.693726  216515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-918853
	I1020 12:38:43.714455  216515 ssh_runner.go:195] Run: cat /version.json
	I1020 12:38:43.714505  216515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:38:43.714515  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:43.714583  216515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918853
	I1020 12:38:43.738826  216515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/pause-918853/id_rsa Username:docker}
	I1020 12:38:43.739210  216515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/pause-918853/id_rsa Username:docker}
	I1020 12:38:43.945865  216515 ssh_runner.go:195] Run: systemctl --version
	I1020 12:38:43.957690  216515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:38:44.024544  216515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:38:44.031152  216515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:38:44.031224  216515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:38:44.045832  216515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 12:38:44.045864  216515 start.go:495] detecting cgroup driver to use...
	I1020 12:38:44.045902  216515 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:38:44.045954  216515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:38:44.084963  216515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:38:44.109132  216515 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:38:44.109259  216515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:38:44.148637  216515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:38:44.166062  216515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:38:44.290260  216515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:38:44.436804  216515 docker.go:234] disabling docker service ...
	I1020 12:38:44.436875  216515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:38:44.455858  216515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:38:44.477081  216515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:38:44.665371  216515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:38:44.823983  216515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:38:44.839816  216515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:38:44.856290  216515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:38:44.856350  216515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:44.924974  216515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:38:44.925055  216515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:44.949274  216515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:44.961996  216515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:44.976646  216515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:38:44.988639  216515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:45.000414  216515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:45.012229  216515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
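
Note: the sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager = "systemd" (matching the detected host driver), re-add conmon_cgroup = "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A sketch of two of those line-oriented substitutions in Go; the sample input values are assumed, not taken from the node.

// Sketch: the pause_image and cgroup_manager rewrites as regexp
// substitutions (the real edits are the sed -i commands above).
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "cgroupfs"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
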
	I1020 12:38:45.026165  216515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:38:45.037003  216515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:38:45.046683  216515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:45.180887  216515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:38:45.456384  216515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:38:45.456457  216515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:38:45.460963  216515 start.go:563] Will wait 60s for crictl version
	I1020 12:38:45.461035  216515 ssh_runner.go:195] Run: which crictl
	I1020 12:38:45.465347  216515 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:38:45.492694  216515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:38:45.492883  216515 ssh_runner.go:195] Run: crio --version
	I1020 12:38:45.533491  216515 ssh_runner.go:195] Run: crio --version
	I1020 12:38:45.568345  216515 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:38:43.500437  210789 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-123936-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-123936 --entrypoint /usr/bin/test -v missing-upgrade-123936:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (3.807193406s)
	I1020 12:38:43.500463  210789 oci.go:107] Successfully prepared a docker volume missing-upgrade-123936
	I1020 12:38:43.500518  210789 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1020 12:38:43.500541  210789 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:38:43.500857  210789 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-123936:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 12:38:43.218711  215841 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-365628:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.05729701s)
	I1020 12:38:43.218736  215841 kic.go:203] duration metric: took 6.057448679s to extract preloaded images to volume ...
	W1020 12:38:43.218836  215841 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 12:38:43.218871  215841 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 12:38:43.218914  215841 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:38:43.303571  215841 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-365628 --name cert-expiration-365628 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-365628 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-365628 --network cert-expiration-365628 --ip 192.168.76.2 --volume cert-expiration-365628:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
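
Note: the long docker run line above creates the actual KIC node: a privileged container with tmpfs mounts for systemd, a static IP on the profile's dedicated network, the preloaded volume at /var, and SSH/apiserver ports published only on 127.0.0.1. A condensed, commented sketch of that flag set; values come from the log, but labels and some published ports are omitted and the image digest is elided.

// Sketch: the KIC node container run, condensed from the log above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	name := "cert-expiration-365628"
	img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773" // digest omitted
	args := []string{"run", "-d", "-t",
		"--privileged", // node runs systemd plus a container runtime inside
		"--security-opt", "seccomp=unconfined",
		"--security-opt", "apparmor=unconfined",
		"--tmpfs", "/tmp", "--tmpfs", "/run", // fresh tmpfs mounts for systemd
		"-v", "/lib/modules:/lib/modules:ro", // host kernel modules, read-only
		"--hostname", name, "--name", name,
		"--network", name, "--ip", "192.168.76.2", // static IP on the per-profile bridge
		"--volume", name + ":/var", // preloaded images live here
		"--memory", "3072mb",
		"-e", "container=docker",
		"--expose", "8443",
		"--publish", "127.0.0.1::8443", // apiserver on a random host port
		"--publish", "127.0.0.1::22", // ssh on a random host port
		img,
	}
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
}
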
	I1020 12:38:43.653062  215841 cli_runner.go:164] Run: docker container inspect cert-expiration-365628 --format={{.State.Running}}
	I1020 12:38:43.680528  215841 cli_runner.go:164] Run: docker container inspect cert-expiration-365628 --format={{.State.Status}}
	I1020 12:38:43.704684  215841 cli_runner.go:164] Run: docker exec cert-expiration-365628 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:38:43.765652  215841 oci.go:144] the created container "cert-expiration-365628" has a running status.
	I1020 12:38:43.765674  215841 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa...
	I1020 12:38:44.218625  215841 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:38:44.410619  215841 cli_runner.go:164] Run: docker container inspect cert-expiration-365628 --format={{.State.Status}}
	I1020 12:38:44.435060  215841 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:38:44.435074  215841 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-365628 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:38:44.496846  215841 cli_runner.go:164] Run: docker container inspect cert-expiration-365628 --format={{.State.Status}}
	I1020 12:38:44.523642  215841 machine.go:93] provisionDockerMachine start ...
	I1020 12:38:44.523726  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:44.558745  215841 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:44.559123  215841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1020 12:38:44.559136  215841 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:38:44.725760  215841 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-365628
	
	I1020 12:38:44.725802  215841 ubuntu.go:182] provisioning hostname "cert-expiration-365628"
	I1020 12:38:44.725867  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:44.750269  215841 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:44.750565  215841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1020 12:38:44.750577  215841 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-365628 && echo "cert-expiration-365628" | sudo tee /etc/hostname
	I1020 12:38:44.948980  215841 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-365628
	
	I1020 12:38:44.949053  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:44.973288  215841 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:44.973495  215841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1020 12:38:44.973511  215841 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-365628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-365628/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-365628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:38:45.134845  215841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:38:45.134867  215841 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:38:45.134894  215841 ubuntu.go:190] setting up certificates
	I1020 12:38:45.134907  215841 provision.go:84] configureAuth start
	I1020 12:38:45.134973  215841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-365628
	I1020 12:38:45.156698  215841 provision.go:143] copyHostCerts
	I1020 12:38:45.156752  215841 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:38:45.156760  215841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:38:45.156866  215841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:38:45.156996  215841 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:38:45.157003  215841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:38:45.157046  215841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:38:45.157146  215841 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:38:45.157151  215841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:38:45.157188  215841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:38:45.157279  215841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-365628 san=[127.0.0.1 192.168.76.2 cert-expiration-365628 localhost minikube]
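The server certificate generated here is signed by the local minikube CA and carries the SAN set from the log line: 127.0.0.1 (the host-side forwarded port), 192.168.76.2 (the machine's address on its Docker network), and the DNS names the machine may be reached by. A self-contained sketch of issuing such a certificate with Go's crypto/x509; the CA is generated inline so the example runs standalone (minikube instead loads the existing ca.pem/ca-key.pem), error handling is elided, and the three-year lifetime mirrors the CertExpiration:26280h0m0s default visible in the config dumps below:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA key pair; generated here only to keep the sketch self-contained.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SAN set from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-365628"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            DNSNames:     []string{"cert-expiration-365628", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }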
	I1020 12:38:45.855508  215841 provision.go:177] copyRemoteCerts
	I1020 12:38:45.855577  215841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:38:45.855623  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:45.877084  215841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa Username:docker}
	I1020 12:38:45.982630  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:38:46.005544  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1020 12:38:46.027755  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:38:46.046965  215841 provision.go:87] duration metric: took 912.046438ms to configureAuth
	I1020 12:38:46.046991  215841 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:38:46.047180  215841 config.go:182] Loaded profile config "cert-expiration-365628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:46.047301  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:46.069269  215841 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:46.069592  215841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1020 12:38:46.069610  215841 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:38:43.219485  215874 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-670413:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.972007092s)
	I1020 12:38:43.219514  215874 kic.go:203] duration metric: took 5.972153427s to extract preloaded images to volume ...
	W1020 12:38:43.219604  215874 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 12:38:43.219648  215874 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 12:38:43.219697  215874 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:38:43.302911  215874 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-670413 --name force-systemd-flag-670413 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-670413 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-670413 --network force-systemd-flag-670413 --ip 192.168.103.2 --volume force-systemd-flag-670413:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 12:38:43.788903  215874 cli_runner.go:164] Run: docker container inspect force-systemd-flag-670413 --format={{.State.Running}}
	I1020 12:38:43.815067  215874 cli_runner.go:164] Run: docker container inspect force-systemd-flag-670413 --format={{.State.Status}}
	I1020 12:38:43.845670  215874 cli_runner.go:164] Run: docker exec force-systemd-flag-670413 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:38:43.908163  215874 oci.go:144] the created container "force-systemd-flag-670413" has a running status.
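After the docker run, liveness is confirmed by shelling out to docker container inspect with Go templates over the State fields, exactly as the two Run lines above show. A hedged sketch of the same poll loop (the container name and retry budget are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitRunning polls `docker container inspect --format {{.State.Running}}`
    // until the container reports true or the attempts run out.
    func waitRunning(name string) error {
        for i := 0; i < 30; i++ {
            out, err := exec.Command("docker", "container", "inspect",
                name, "--format", "{{.State.Running}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "true" {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("container %s never reached running state", name)
    }

    func main() {
        if err := waitRunning("force-systemd-flag-670413"); err != nil {
            fmt.Println(err)
        }
    }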
	I1020 12:38:43.908194  215874 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa...
	I1020 12:38:44.393836  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1020 12:38:44.393889  215874 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:38:44.464792  215874 cli_runner.go:164] Run: docker container inspect force-systemd-flag-670413 --format={{.State.Status}}
	I1020 12:38:44.497632  215874 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:38:44.497666  215874 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-670413 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:38:44.572041  215874 cli_runner.go:164] Run: docker container inspect force-systemd-flag-670413 --format={{.State.Status}}
	I1020 12:38:44.597538  215874 machine.go:93] provisionDockerMachine start ...
	I1020 12:38:44.597742  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:44.626658  215874 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:44.627600  215874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1020 12:38:44.627622  215874 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:38:44.784485  215874 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-670413
	
	I1020 12:38:44.784511  215874 ubuntu.go:182] provisioning hostname "force-systemd-flag-670413"
	I1020 12:38:44.784575  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:44.812685  215874 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:44.813014  215874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1020 12:38:44.813036  215874 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-670413 && echo "force-systemd-flag-670413" | sudo tee /etc/hostname
	I1020 12:38:44.976328  215874 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-670413
	
	I1020 12:38:44.976407  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:45.000411  215874 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:45.000717  215874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1020 12:38:45.000754  215874 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-670413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-670413/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-670413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:38:45.154798  215874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:38:45.154842  215874 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:38:45.154897  215874 ubuntu.go:190] setting up certificates
	I1020 12:38:45.154909  215874 provision.go:84] configureAuth start
	I1020 12:38:45.154979  215874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-670413
	I1020 12:38:45.175931  215874 provision.go:143] copyHostCerts
	I1020 12:38:45.175980  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:38:45.176009  215874 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:38:45.176016  215874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:38:45.176081  215874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:38:45.176175  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:38:45.176199  215874 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:38:45.176206  215874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:38:45.176242  215874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:38:45.176313  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:38:45.176341  215874 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:38:45.176350  215874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:38:45.176377  215874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:38:45.176448  215874 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-670413 san=[127.0.0.1 192.168.103.2 force-systemd-flag-670413 localhost minikube]
	I1020 12:38:45.691704  215874 provision.go:177] copyRemoteCerts
	I1020 12:38:45.691783  215874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:38:45.691828  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:45.713247  215874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa Username:docker}
	I1020 12:38:45.816413  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1020 12:38:45.816470  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:38:45.846408  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1020 12:38:45.846477  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1020 12:38:45.866696  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1020 12:38:45.866766  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:38:45.888351  215874 provision.go:87] duration metric: took 733.420159ms to configureAuth
	I1020 12:38:45.888383  215874 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:38:45.888556  215874 config.go:182] Loaded profile config "force-systemd-flag-670413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:45.888664  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:45.911332  215874 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:45.911523  215874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1020 12:38:45.911539  215874 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:38:46.191701  215874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:38:46.191729  215874 machine.go:96] duration metric: took 1.594109749s to provisionDockerMachine
	I1020 12:38:46.191749  215874 client.go:171] duration metric: took 9.572227725s to LocalClient.Create
	I1020 12:38:46.191791  215874 start.go:167] duration metric: took 9.572310561s to libmachine.API.Create "force-systemd-flag-670413"
	I1020 12:38:46.191807  215874 start.go:293] postStartSetup for "force-systemd-flag-670413" (driver="docker")
	I1020 12:38:46.191822  215874 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:38:46.191890  215874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:38:46.191939  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:46.214637  215874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa Username:docker}
	I1020 12:38:46.326050  215874 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:38:46.330392  215874 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:38:46.330427  215874 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:38:46.330443  215874 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:38:46.330499  215874 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:38:46.330586  215874 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:38:46.330600  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> /etc/ssl/certs/145922.pem
	I1020 12:38:46.330688  215874 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:38:46.339829  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:38:46.365970  215874 start.go:296] duration metric: took 174.145915ms for postStartSetup
	I1020 12:38:46.366426  215874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-670413
	I1020 12:38:46.390475  215874 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/config.json ...
	I1020 12:38:46.390834  215874 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:38:46.390894  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:45.572935  216515 cli_runner.go:164] Run: docker network inspect pause-918853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:45.592821  216515 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 12:38:45.598368  216515 kubeadm.go:883] updating cluster {Name:pause-918853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-918853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:38:45.598537  216515 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:38:45.598604  216515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:38:45.637935  216515 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:38:45.637966  216515 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:38:45.638020  216515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:38:45.670288  216515 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:38:45.670319  216515 cache_images.go:85] Images are preloaded, skipping loading
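The preload check above runs crictl images --output json and compares the result against the image set expected for v1.34.1, so a warm node skips tarball extraction entirely. A rough sketch of consuming that JSON (the field names follow the CRI ListImagesResponse; treat the exact schema as an assumption, since it can vary across crictl versions):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Output shape of `crictl images --output json` (assumed schema).
    type imageList struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            fmt.Println(img.RepoTags)
        }
    }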
	I1020 12:38:45.670328  216515 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 12:38:45.670470  216515 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-918853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-918853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
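The empty ExecStart= in the drop-in above is the standard systemd idiom for list-type directives: an empty assignment clears whatever the base unit defined, and the second ExecStart= then replaces the command rather than appending a second one. The drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below.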
	I1020 12:38:45.670561  216515 ssh_runner.go:195] Run: crio config
	I1020 12:38:45.733472  216515 cni.go:84] Creating CNI manager for ""
	I1020 12:38:45.733493  216515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:38:45.733509  216515 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:38:45.733537  216515 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-918853 NodeName:pause-918853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:38:45.733679  216515 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-918853"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 12:38:45.733741  216515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:38:45.743225  216515 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:38:45.743292  216515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:38:45.752226  216515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1020 12:38:45.767150  216515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:38:45.780437  216515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
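All three rendered files above (the kubelet drop-in, the kubelet unit, and the kubeadm config) are scp'd from in-memory buffers, hence "scp memory". The kubeadm config lands as kubeadm.yaml.new rather than kubeadm.yaml so it can be diffed against the live copy further down (the sudo diff -u run) and reconfiguration skipped when nothing has changed.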
	I1020 12:38:45.794705  216515 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:38:45.798793  216515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:45.937312  216515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:38:45.953170  216515 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853 for IP: 192.168.85.2
	I1020 12:38:45.953197  216515 certs.go:195] generating shared ca certs ...
	I1020 12:38:45.953217  216515 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:45.953401  216515 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:38:45.953463  216515 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:38:45.953478  216515 certs.go:257] generating profile certs ...
	I1020 12:38:45.953586  216515 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.key
	I1020 12:38:45.953671  216515 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/apiserver.key.44a82604
	I1020 12:38:45.953740  216515 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/proxy-client.key
	I1020 12:38:45.953936  216515 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:38:45.953984  216515 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:38:45.954008  216515 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:38:45.954041  216515 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:38:45.954078  216515 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:38:45.954114  216515 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:38:45.954177  216515 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:38:45.955006  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:38:45.977336  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:38:45.999396  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:38:46.019446  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:38:46.040025  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1020 12:38:46.059673  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:38:46.082343  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:38:46.102835  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 12:38:46.126376  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:38:46.148484  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:38:46.171011  216515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:38:46.193010  216515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:38:46.209854  216515 ssh_runner.go:195] Run: openssl version
	I1020 12:38:46.217016  216515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:38:46.227138  216515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:38:46.232076  216515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:38:46.232145  216515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:38:46.273043  216515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
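Each CA certificate is installed twice: the PEM is linked into /etc/ssl/certs under its own name, then again under <subject-hash>.0 (3ec20f2e.0 here), the hashed-directory naming OpenSSL uses to look up trust anchors; the openssl x509 -hash -noout run above computes that hash. A small Go sketch of the same dance (paths are the ones from the log; it needs openssl on PATH and write access to /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/etc/ssl/certs/145922.pem"
        // Subject-hash naming convention: /etc/ssl/certs/<hash>.0 -> cert.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        _ = os.Remove(link) // ln -fs semantics: replace any existing link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
    }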
	I1020 12:38:46.282377  216515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:38:46.293030  216515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:46.298563  216515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:46.298627  216515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:46.346033  216515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:38:46.357246  216515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:38:46.368645  216515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:38:46.373534  216515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:38:46.373601  216515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:38:46.421875  216515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:38:46.431665  216515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:38:46.436549  216515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 12:38:46.480685  216515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 12:38:46.530199  216515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 12:38:46.580149  216515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 12:38:46.623657  216515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 12:38:46.661723  216515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
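Each control-plane certificate is vetted with openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours). The same check is straightforward to do natively; a sketch, using the path from the second check above:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Native equivalent of `openssl x509 -checkend 86400`: report whether the
    // certificate expires within the next 24 hours.
    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate is good for at least another 24h")
    }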
	I1020 12:38:46.714359  216515 kubeadm.go:400] StartCluster: {Name:pause-918853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-918853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:38:46.714516  216515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:38:46.714589  216515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:38:46.752045  216515 cri.go:89] found id: "72a3b202a76412f26700ad62c38784891b6c00402b287588a8795c5e217ecc86"
	I1020 12:38:46.752072  216515 cri.go:89] found id: "c9e90a7b75b16fc8e8ef756cc88964cd974e6f53a9390b604d0d29be3e4e48e8"
	I1020 12:38:46.752077  216515 cri.go:89] found id: "8845cf52f71fb552c506de59a81f18ebd549bf1903b7034a503f9a73ce2b6fd1"
	I1020 12:38:46.752089  216515 cri.go:89] found id: "ee48e32b2f57cf831f9662b4a8970dd4580fe5fff3bbd3ab9b8a106a97178013"
	I1020 12:38:46.752094  216515 cri.go:89] found id: "84c4c4d5781d5e4a18aa2f86b8f181bb6608c642b20ac03d501b5e5dcf22e42b"
	I1020 12:38:46.752098  216515 cri.go:89] found id: "e0e2d9777d82f4ff2db4444ef7768324a1d003e72c9d5d301c966ab348bbfb96"
	I1020 12:38:46.752102  216515 cri.go:89] found id: "9f83eedefaca2713366a42166d43e08671f32da7f80f270c7d3e27b91389998c"
	I1020 12:38:46.752106  216515 cri.go:89] found id: ""
	I1020 12:38:46.752153  216515 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 12:38:46.765544  216515 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:38:46Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:38:46.765616  216515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:38:46.774473  216515 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 12:38:46.774495  216515 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 12:38:46.774551  216515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 12:38:46.785315  216515 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:38:46.785988  216515 kubeconfig.go:125] found "pause-918853" server: "https://192.168.85.2:8443"
	I1020 12:38:46.786727  216515 kapi.go:59] client config for pause-918853: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.crt", KeyFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.key", CAFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1020 12:38:46.787148  216515 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1020 12:38:46.787162  216515 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1020 12:38:46.787167  216515 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1020 12:38:46.787170  216515 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1020 12:38:46.787174  216515 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1020 12:38:46.787539  216515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 12:38:46.796535  216515 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1020 12:38:46.796578  216515 kubeadm.go:601] duration metric: took 22.077249ms to restartPrimaryControlPlane
	I1020 12:38:46.796590  216515 kubeadm.go:402] duration metric: took 82.241896ms to StartCluster
	I1020 12:38:46.796623  216515 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:46.796696  216515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:38:46.797952  216515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:46.798230  216515 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:38:46.798288  216515 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:38:46.798488  216515 config.go:182] Loaded profile config "pause-918853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:46.801114  216515 out.go:179] * Verifying Kubernetes components...
	I1020 12:38:46.801118  216515 out.go:179] * Enabled addons: 
	I1020 12:38:46.802682  216515 addons.go:514] duration metric: took 4.397627ms for enable addons: enabled=[]
	I1020 12:38:46.802726  216515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:46.944829  216515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:38:46.967350  216515 node_ready.go:35] waiting up to 6m0s for node "pause-918853" to be "Ready" ...
	I1020 12:38:46.976155  216515 node_ready.go:49] node "pause-918853" is "Ready"
	I1020 12:38:46.976179  216515 node_ready.go:38] duration metric: took 8.792396ms for node "pause-918853" to be "Ready" ...
	I1020 12:38:46.976192  216515 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:38:46.976239  216515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:38:46.988996  216515 api_server.go:72] duration metric: took 190.725956ms to wait for apiserver process to appear ...
	I1020 12:38:46.989027  216515 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:38:46.989052  216515 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:38:46.994398  216515 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 12:38:46.995831  216515 api_server.go:141] control plane version: v1.34.1
	I1020 12:38:46.995862  216515 api_server.go:131] duration metric: took 6.825551ms to wait for apiserver health ...
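The healthz probe is a plain HTTPS GET against the API server, authenticated with the client certificate and CA from the rest.Config dumped above. A self-contained sketch (the base directory is illustrative; the log's actual profile lives under /home/jenkins/minikube-integration/21773-11075/.minikube):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        // Profile layout as in the rest.Config above; base path is an assumption.
        base := os.Getenv("HOME") + "/.minikube"
        cert, err := tls.LoadX509KeyPair(
            base+"/profiles/pause-918853/client.crt",
            base+"/profiles/pause-918853/client.key")
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile(base + "/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
            Certificates: []tls.Certificate{cert},
            RootCAs:      pool,
        }}}
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body)) // the log shows 200 with body "ok"
    }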
	I1020 12:38:46.995874  216515 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:38:47.000266  216515 system_pods.go:59] 7 kube-system pods found
	I1020 12:38:47.000314  216515 system_pods.go:61] "coredns-66bc5c9577-wnfvn" [456a6380-cb4a-4846-be4c-30bba34b7db3] Running
	I1020 12:38:47.000324  216515 system_pods.go:61] "etcd-pause-918853" [4d89e9c8-70a8-460b-a4ca-b0df9da06427] Running
	I1020 12:38:47.000330  216515 system_pods.go:61] "kindnet-pvqlr" [3bf6c55d-197d-4297-8c7e-7a7032090942] Running
	I1020 12:38:47.000335  216515 system_pods.go:61] "kube-apiserver-pause-918853" [473c0535-ca0a-4385-9403-cea2d0656193] Running
	I1020 12:38:47.000353  216515 system_pods.go:61] "kube-controller-manager-pause-918853" [cfb03e63-d5e2-4aad-8270-7b10ba695e5f] Running
	I1020 12:38:47.000361  216515 system_pods.go:61] "kube-proxy-9md6s" [7ab94d55-c409-4d18-8205-59568b5cfb7a] Running
	I1020 12:38:47.000366  216515 system_pods.go:61] "kube-scheduler-pause-918853" [2f5f0af8-d9ae-41a0-8c75-3d8c7b06a48a] Running
	I1020 12:38:47.000374  216515 system_pods.go:74] duration metric: took 4.493144ms to wait for pod list to return data ...
	I1020 12:38:47.000388  216515 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:38:47.004527  216515 default_sa.go:45] found service account: "default"
	I1020 12:38:47.004551  216515 default_sa.go:55] duration metric: took 4.155159ms for default service account to be created ...
	I1020 12:38:47.004563  216515 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:38:47.007615  216515 system_pods.go:86] 7 kube-system pods found
	I1020 12:38:47.007653  216515 system_pods.go:89] "coredns-66bc5c9577-wnfvn" [456a6380-cb4a-4846-be4c-30bba34b7db3] Running
	I1020 12:38:47.007662  216515 system_pods.go:89] "etcd-pause-918853" [4d89e9c8-70a8-460b-a4ca-b0df9da06427] Running
	I1020 12:38:47.007668  216515 system_pods.go:89] "kindnet-pvqlr" [3bf6c55d-197d-4297-8c7e-7a7032090942] Running
	I1020 12:38:47.007674  216515 system_pods.go:89] "kube-apiserver-pause-918853" [473c0535-ca0a-4385-9403-cea2d0656193] Running
	I1020 12:38:47.007680  216515 system_pods.go:89] "kube-controller-manager-pause-918853" [cfb03e63-d5e2-4aad-8270-7b10ba695e5f] Running
	I1020 12:38:47.007686  216515 system_pods.go:89] "kube-proxy-9md6s" [7ab94d55-c409-4d18-8205-59568b5cfb7a] Running
	I1020 12:38:47.007693  216515 system_pods.go:89] "kube-scheduler-pause-918853" [2f5f0af8-d9ae-41a0-8c75-3d8c7b06a48a] Running
	I1020 12:38:47.007705  216515 system_pods.go:126] duration metric: took 3.135625ms to wait for k8s-apps to be running ...
	I1020 12:38:47.007717  216515 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:38:47.007763  216515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:38:47.023158  216515 system_svc.go:56] duration metric: took 15.430738ms WaitForService to wait for kubelet
	I1020 12:38:47.023195  216515 kubeadm.go:586] duration metric: took 224.931257ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:38:47.023218  216515 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:38:47.026679  216515 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:38:47.026710  216515 node_conditions.go:123] node cpu capacity is 8
	I1020 12:38:47.026728  216515 node_conditions.go:105] duration metric: took 3.504646ms to run NodePressure ...
	I1020 12:38:47.026741  216515 start.go:241] waiting for startup goroutines ...
	I1020 12:38:47.026750  216515 start.go:246] waiting for cluster config update ...
	I1020 12:38:47.026760  216515 start.go:255] writing updated cluster config ...
	I1020 12:38:47.027128  216515 ssh_runner.go:195] Run: rm -f paused
	I1020 12:38:47.031140  216515 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:38:47.031998  216515 kapi.go:59] client config for pause-918853: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.crt", KeyFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.key", CAFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1020 12:38:47.035353  216515 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wnfvn" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.040523  216515 pod_ready.go:94] pod "coredns-66bc5c9577-wnfvn" is "Ready"
	I1020 12:38:47.040544  216515 pod_ready.go:86] duration metric: took 5.17249ms for pod "coredns-66bc5c9577-wnfvn" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.042835  216515 pod_ready.go:83] waiting for pod "etcd-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.047139  216515 pod_ready.go:94] pod "etcd-pause-918853" is "Ready"
	I1020 12:38:47.047165  216515 pod_ready.go:86] duration metric: took 4.30683ms for pod "etcd-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.049481  216515 pod_ready.go:83] waiting for pod "kube-apiserver-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.054237  216515 pod_ready.go:94] pod "kube-apiserver-pause-918853" is "Ready"
	I1020 12:38:47.054261  216515 pod_ready.go:86] duration metric: took 4.752013ms for pod "kube-apiserver-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.056582  216515 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.439578  216515 pod_ready.go:94] pod "kube-controller-manager-pause-918853" is "Ready"
	I1020 12:38:47.439602  216515 pod_ready.go:86] duration metric: took 382.99886ms for pod "kube-controller-manager-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:47.636219  216515 pod_ready.go:83] waiting for pod "kube-proxy-9md6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:48.035585  216515 pod_ready.go:94] pod "kube-proxy-9md6s" is "Ready"
	I1020 12:38:48.035609  216515 pod_ready.go:86] duration metric: took 399.368026ms for pod "kube-proxy-9md6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:48.236170  216515 pod_ready.go:83] waiting for pod "kube-scheduler-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:48.635349  216515 pod_ready.go:94] pod "kube-scheduler-pause-918853" is "Ready"
	I1020 12:38:48.635375  216515 pod_ready.go:86] duration metric: took 399.180697ms for pod "kube-scheduler-pause-918853" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:38:48.635386  216515 pod_ready.go:40] duration metric: took 1.604209536s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:38:48.681854  216515 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:38:48.791968  216515 out.go:179] * Done! kubectl is now configured to use "pause-918853" cluster and "default" namespace by default
	I1020 12:38:46.354827  215841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:38:46.354848  215841 machine.go:96] duration metric: took 1.831192867s to provisionDockerMachine
	I1020 12:38:46.354859  215841 client.go:171] duration metric: took 9.786313178s to LocalClient.Create
	I1020 12:38:46.354880  215841 start.go:167] duration metric: took 9.786377291s to libmachine.API.Create "cert-expiration-365628"
	I1020 12:38:46.354888  215841 start.go:293] postStartSetup for "cert-expiration-365628" (driver="docker")
	I1020 12:38:46.354900  215841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:38:46.354981  215841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:38:46.355026  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:46.379997  215841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa Username:docker}
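
Note: each "new ssh client" entry above dials the container's SSH port forwarded to 127.0.0.1 and authenticates with the per-machine id_rsa key. A minimal sketch using golang.org/x/crypto/ssh (an assumed dependency; minikube's sshutil/ssh_runner differ in detail):

    package sshutil

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runCommand dials a forwarded docker port and runs one command, the way
    // the ssh_runner entries above run `cat /etc/os-release` etc. Sketch only.
    func runCommand(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }
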
	I1020 12:38:46.487863  215841 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:38:46.491927  215841 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:38:46.491952  215841 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:38:46.491964  215841 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:38:46.492032  215841 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:38:46.492151  215841 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:38:46.492277  215841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:38:46.500886  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:38:46.524757  215841 start.go:296] duration metric: took 169.855258ms for postStartSetup
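
Note: postStartSetup scans .minikube/files and mirrors each file onto the machine at the path relative to that root (the log shows files/etc/ssl/certs/145922.pem -> /etc/ssl/certs/145922.pem). A sketch of just that mapping step, assuming only the standard library (the scp transfer itself is omitted):

    package filesync

    import (
        "io/fs"
        "path/filepath"
        "strings"
    )

    // scanAssets maps each file under root to its destination path on the
    // machine. Illustrative helper, not minikube's filesync.go.
    func scanAssets(root string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, err := filepath.Rel(root, path)
            if err != nil {
                return err
            }
            // files/etc/ssl/certs/foo.pem becomes /etc/ssl/certs/foo.pem
            assets[path] = "/" + strings.ReplaceAll(rel, string(filepath.Separator), "/")
            return nil
        })
        return assets, err
    }
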
	I1020 12:38:46.525253  215841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-365628
	I1020 12:38:46.547734  215841 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/config.json ...
	I1020 12:38:46.548094  215841 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:38:46.548140  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:46.570631  215841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa Username:docker}
	I1020 12:38:46.677196  215841 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:38:46.683234  215841 start.go:128] duration metric: took 10.118415299s to createHost
	I1020 12:38:46.683254  215841 start.go:83] releasing machines lock for "cert-expiration-365628", held for 10.118544604s
	I1020 12:38:46.683327  215841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-365628
	I1020 12:38:46.704393  215841 ssh_runner.go:195] Run: cat /version.json
	I1020 12:38:46.704427  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:46.704508  215841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:38:46.704578  215841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-365628
	I1020 12:38:46.725680  215841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa Username:docker}
	I1020 12:38:46.727099  215841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/cert-expiration-365628/id_rsa Username:docker}
	I1020 12:38:46.904970  215841 ssh_runner.go:195] Run: systemctl --version
	I1020 12:38:46.913669  215841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:38:46.964964  215841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:38:46.970674  215841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:38:46.970731  215841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:38:47.004318  215841 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
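
Note: the find/mv above side-lines any bridge or podman CNI config by renaming it with a .mk_disabled suffix so cri-o will not load it (kindnet is installed later instead). A local-filesystem sketch of the same rename, with the SSH indirection omitted:

    package cni

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeConfigs renames bridge/podman CNI configs under dir so the
    // runtime ignores them, mirroring the find + mv in the log. Sketch only.
    func disableBridgeConfigs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }
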
	I1020 12:38:47.004330  215841 start.go:495] detecting cgroup driver to use...
	I1020 12:38:47.004364  215841 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:38:47.004407  215841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:38:47.024654  215841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:38:47.040690  215841 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:38:47.040734  215841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:38:47.062030  215841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:38:47.085942  215841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:38:47.186718  215841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:38:47.299460  215841 docker.go:234] disabling docker service ...
	I1020 12:38:47.299509  215841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:38:47.318691  215841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:38:47.332867  215841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:38:47.446888  215841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:38:47.532073  215841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:38:47.544858  215841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:38:47.559880  215841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:38:47.559934  215841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.612228  215841 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:38:47.612283  215841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.622029  215841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.631503  215841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.705380  215841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:38:47.714162  215841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.762944  215841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.896078  215841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:48.019494  215841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:38:48.028145  215841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:38:48.036820  215841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:48.123428  215841 ssh_runner.go:195] Run: sudo systemctl restart crio
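
Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause_image, cgroup_manager = "systemd", a conmon_cgroup = "pod" line, and a default_sysctls entry opening unprivileged ports. A sketch of the first two edits done with Go regexps instead of sed over SSH (function name assumed):

    package crio

    import (
        "os"
        "regexp"
    )

    // setCrioOptions rewrites pause_image and cgroup_manager in a cri-o
    // drop-in, mirroring the sed edits in the log. Illustrative sketch.
    func setCrioOptions(path, pauseImage, cgroupManager string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupManager+`"`))
        return os.WriteFile(path, data, 0o644)
    }
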
	I1020 12:38:49.099330  215841 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:38:49.099419  215841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:38:49.106492  215841 start.go:563] Will wait 60s for crictl version
	I1020 12:38:49.106560  215841 ssh_runner.go:195] Run: which crictl
	I1020 12:38:49.113298  215841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:38:49.144648  215841 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:38:49.144742  215841 ssh_runner.go:195] Run: crio --version
	I1020 12:38:49.179879  215841 ssh_runner.go:195] Run: crio --version
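
Note: after restarting cri-o, minikube waits up to 60s each for the CRI socket to appear and for crictl version to answer, as the "Will wait 60s" lines above show. A sketch of the socket wait (helper name and poll interval are illustrative):

    package crio

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket blocks until the CRI socket exists or the timeout passes,
    // like the "Will wait 60s for socket path" step. Sketch only.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %v", path, timeout)
    }
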
	I1020 12:38:46.411891  215874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa Username:docker}
	I1020 12:38:46.517411  215874 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:38:46.523012  215874 start.go:128] duration metric: took 9.905815707s to createHost
	I1020 12:38:46.523044  215874 start.go:83] releasing machines lock for "force-systemd-flag-670413", held for 9.905997323s
	I1020 12:38:46.523119  215874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-670413
	I1020 12:38:46.544420  215874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:38:46.544466  215874 ssh_runner.go:195] Run: cat /version.json
	I1020 12:38:46.544504  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:46.544523  215874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-670413
	I1020 12:38:46.566569  215874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa Username:docker}
	I1020 12:38:46.569700  215874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/force-systemd-flag-670413/id_rsa Username:docker}
	I1020 12:38:46.743853  215874 ssh_runner.go:195] Run: systemctl --version
	I1020 12:38:46.752201  215874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:38:46.796728  215874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:38:46.802751  215874 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:38:46.802830  215874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:38:46.833693  215874 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1020 12:38:46.833739  215874 start.go:495] detecting cgroup driver to use...
	I1020 12:38:46.833755  215874 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1020 12:38:46.833825  215874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:38:46.860656  215874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:38:46.874216  215874 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:38:46.874277  215874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:38:46.894288  215874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:38:46.917602  215874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:38:47.032940  215874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:38:47.150282  215874 docker.go:234] disabling docker service ...
	I1020 12:38:47.150356  215874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:38:47.169385  215874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:38:47.185367  215874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:38:47.298927  215874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:38:47.395851  215874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:38:47.409756  215874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:38:47.424202  215874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:38:47.424296  215874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.439824  215874 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:38:47.439882  215874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.505005  215874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.568947  215874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.600007  215874 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:38:47.609505  215874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.619610  215874 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.705434  215874 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:38:47.762941  215874 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:38:47.772019  215874 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:38:47.779566  215874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:47.859586  215874 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:38:49.093904  215874 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.234279705s)
	I1020 12:38:49.093940  215874 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:38:49.094069  215874 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:38:49.100668  215874 start.go:563] Will wait 60s for crictl version
	I1020 12:38:49.100723  215874 ssh_runner.go:195] Run: which crictl
	I1020 12:38:49.106157  215874 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:38:49.146029  215874 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:38:49.146246  215874 ssh_runner.go:195] Run: crio --version
	I1020 12:38:49.182539  215874 ssh_runner.go:195] Run: crio --version
	I1020 12:38:49.219194  215841 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:38:49.220802  215874 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:38:48.990436  210789 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-123936:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.489501875s)
	I1020 12:38:48.990480  210789 kic.go:203] duration metric: took 5.4899238s to extract preloaded images to volume ...
	W1020 12:38:48.990574  210789 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 12:38:48.990610  210789 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 12:38:48.990657  210789 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:38:49.066575  210789 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-123936 --name missing-upgrade-123936 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-123936 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-123936 --network missing-upgrade-123936 --ip 192.168.94.2 --volume missing-upgrade-123936:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1020 12:38:49.408836  210789 cli_runner.go:164] Run: docker container inspect missing-upgrade-123936 --format={{.State.Running}}
	I1020 12:38:49.430320  210789 cli_runner.go:164] Run: docker container inspect missing-upgrade-123936 --format={{.State.Status}}
	I1020 12:38:49.452884  210789 cli_runner.go:164] Run: docker exec missing-upgrade-123936 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:38:49.505343  210789 oci.go:144] the created container "missing-upgrade-123936" has a running status.
	I1020 12:38:49.505379  210789 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/missing-upgrade-123936/id_rsa...
	I1020 12:38:49.667457  210789 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/missing-upgrade-123936/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:38:49.701914  210789 cli_runner.go:164] Run: docker container inspect missing-upgrade-123936 --format={{.State.Status}}
	I1020 12:38:49.729955  210789 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:38:49.729980  210789 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-123936 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:38:49.800378  210789 cli_runner.go:164] Run: docker container inspect missing-upgrade-123936 --format={{.State.Status}}
	I1020 12:38:49.825389  210789 machine.go:93] provisionDockerMachine start ...
	I1020 12:38:49.825513  210789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123936
	I1020 12:38:49.846904  210789 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:49.847225  210789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1020 12:38:49.847247  210789 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:38:49.982915  210789 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-123936
	
	I1020 12:38:49.982939  210789 ubuntu.go:182] provisioning hostname "missing-upgrade-123936"
	I1020 12:38:49.983010  210789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123936
	I1020 12:38:50.004554  210789 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:50.004857  210789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1020 12:38:50.004876  210789 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-123936 && echo "missing-upgrade-123936" | sudo tee /etc/hostname
	I1020 12:38:50.144001  210789 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-123936
	
	I1020 12:38:50.144087  210789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123936
	I1020 12:38:50.164505  210789 main.go:141] libmachine: Using SSH client type: native
	I1020 12:38:50.164789  210789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1020 12:38:50.164830  210789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-123936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-123936/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-123936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:38:50.284484  210789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
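
Note: the shell block above guarantees a 127.0.1.1 entry for the machine name in /etc/hosts, rewriting an existing 127.0.1.1 line or appending one. The same logic, sketched locally in Go (hypothetical helper; the real code runs the shell via SSH with sudo):

    package provision

    import (
        "fmt"
        "os"
        "regexp"
    )

    // ensureHostname mirrors the grep/sed/tee sequence in the log. Sketch only.
    func ensureHostname(hostsPath, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
            return nil // hostname already mapped somewhere in /etc/hosts
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.Match(data) {
            data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+name))
        } else {
            data = append(data, []byte(fmt.Sprintf("127.0.1.1 %s\n", name))...)
        }
        return os.WriteFile(hostsPath, data, 0o644)
    }
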
	I1020 12:38:50.284534  210789 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:38:50.284578  210789 ubuntu.go:190] setting up certificates
	I1020 12:38:50.284591  210789 provision.go:84] configureAuth start
	I1020 12:38:50.284652  210789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-123936
	I1020 12:38:50.303830  210789 provision.go:143] copyHostCerts
	I1020 12:38:50.303900  210789 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:38:50.303915  210789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:38:50.303993  210789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:38:50.304107  210789 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:38:50.304118  210789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:38:50.304161  210789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:38:50.304251  210789 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:38:50.304261  210789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:38:50.304299  210789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:38:50.304375  210789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-123936 san=[127.0.0.1 192.168.94.2 localhost minikube missing-upgrade-123936]
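
Note: provision.go signs a per-machine server certificate against the shared minikube CA, with the SANs listed above (loopback, node IP, hostname aliases). A compact crypto/x509 sketch of such a signing step; key size, validity window, and subject are assumptions, not minikube's values:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert signs a server cert with the given IP and DNS SANs,
    // analogous to the "generating server cert ... san=[...]" step. Sketch.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dns []string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"minikube-machine"}}, // assumed
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dns,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }
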
	I1020 12:38:50.611461  210789 provision.go:177] copyRemoteCerts
	I1020 12:38:50.611597  210789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:38:50.611645  210789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123936
	I1020 12:38:50.633842  210789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/missing-upgrade-123936/id_rsa Username:docker}
	I1020 12:38:50.723870  210789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:38:49.221763  215841 cli_runner.go:164] Run: docker network inspect cert-expiration-365628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:49.246591  215841 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1020 12:38:49.251314  215841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:38:49.265641  215841 kubeadm.go:883] updating cluster {Name:cert-expiration-365628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-365628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:38:49.265745  215841 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:38:49.265810  215841 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:38:49.306574  215841 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:38:49.306585  215841 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:38:49.306628  215841 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:38:49.338798  215841 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:38:49.338811  215841 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:38:49.338818  215841 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1020 12:38:49.338901  215841 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-365628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-365628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 12:38:49.338969  215841 ssh_runner.go:195] Run: crio config
	I1020 12:38:49.411495  215841 cni.go:84] Creating CNI manager for ""
	I1020 12:38:49.411509  215841 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:38:49.411529  215841 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:38:49.411555  215841 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-365628 NodeName:cert-expiration-365628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:38:49.411685  215841 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-365628"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 12:38:49.411736  215841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:38:49.423020  215841 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:38:49.423102  215841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:38:49.434854  215841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1020 12:38:49.451433  215841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:38:49.470977  215841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
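
Note: the kubeadm.yaml shown earlier is rendered from Go templates and copied to /var/tmp/minikube/kubeadm.yaml.new before being swapped into place. A toy text/template rendering of just the InitConfiguration fragment; the template body and field names here are illustrative, not minikube's actual template:

    package kubeadm

    import (
        "bytes"
        "text/template"
    )

    // initTmpl is a cut-down stand-in for the InitConfiguration template.
    var initTmpl = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `))

    // renderKubeadmConfig fills the fragment with per-cluster values. Sketch.
    func renderKubeadmConfig(nodeIP string, port int, criSocket, nodeName string) (string, error) {
        var buf bytes.Buffer
        err := initTmpl.Execute(&buf, struct {
            NodeIP    string
            Port      int
            CRISocket string
            NodeName  string
        }{nodeIP, port, criSocket, nodeName})
        return buf.String(), err
    }
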
	I1020 12:38:49.487931  215841 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:38:49.492659  215841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:38:49.506444  215841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:49.620183  215841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:38:49.638271  215841 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628 for IP: 192.168.76.2
	I1020 12:38:49.638281  215841 certs.go:195] generating shared ca certs ...
	I1020 12:38:49.638298  215841 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:49.638425  215841 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:38:49.638549  215841 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:38:49.638557  215841 certs.go:257] generating profile certs ...
	I1020 12:38:49.638613  215841 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/client.key
	I1020 12:38:49.638632  215841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/client.crt with IP's: []
	I1020 12:38:50.037186  215841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/client.crt ...
	I1020 12:38:50.037200  215841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/client.crt: {Name:mk88fa910f6396b666b21ba54195fc932bfe6023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.037354  215841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/client.key ...
	I1020 12:38:50.037361  215841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/client.key: {Name:mk809d75f9e9126fb1947c3690d9a240b4eae2a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.037441  215841 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.key.2340d238
	I1020 12:38:50.037451  215841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.crt.2340d238 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1020 12:38:50.323206  215841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.crt.2340d238 ...
	I1020 12:38:50.323221  215841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.crt.2340d238: {Name:mkec34f54537cb16624e5b7414e45bec2703ea6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.323393  215841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.key.2340d238 ...
	I1020 12:38:50.323415  215841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.key.2340d238: {Name:mk3d60da0cb52b99fb41481c738eccadba6f746b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.323490  215841 certs.go:382] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.crt.2340d238 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.crt
	I1020 12:38:50.323575  215841 certs.go:386] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.key.2340d238 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.key
	I1020 12:38:50.323629  215841 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.key
	I1020 12:38:50.323639  215841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.crt with IP's: []
	I1020 12:38:50.419917  215841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.crt ...
	I1020 12:38:50.419934  215841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.crt: {Name:mka483e947e1f4ab237e0ac8828cedb4fba55513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.420100  215841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.key ...
	I1020 12:38:50.420107  215841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.key: {Name:mk72e3509b8c9d8468f70f839997baed0b9f638c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.420276  215841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:38:50.420305  215841 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:38:50.420311  215841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:38:50.420332  215841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:38:50.420356  215841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:38:50.420380  215841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:38:50.420433  215841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:38:50.421034  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:38:50.439402  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:38:50.456850  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:38:50.474471  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:38:50.492236  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1020 12:38:50.510755  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1020 12:38:50.528158  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:38:50.545252  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/cert-expiration-365628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:38:50.563038  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:38:50.585738  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:38:50.610060  215841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:38:50.631474  215841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:38:50.645232  215841 ssh_runner.go:195] Run: openssl version
	I1020 12:38:50.651614  215841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:38:50.660810  215841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:38:50.664999  215841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:38:50.665046  215841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:38:50.700132  215841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:38:50.710233  215841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:38:50.721395  215841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:38:50.726887  215841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:38:50.726937  215841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:38:50.770009  215841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:38:50.779185  215841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:38:50.789209  215841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:50.793273  215841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:50.793319  215841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:50.838450  215841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
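
Note: each CA above is made system-trusted by linking it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the test certs). A sketch that shells out to openssl for the hash, the same way the log does; function name assumed:

    package certs

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCACert links a CA cert into certsDir under its OpenSSL subject
    // hash, matching the `openssl x509 -hash` + `ln -fs` steps. Needs openssl
    // on PATH and write access to certsDir. Sketch only.
    func installCACert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("%s/%s.0", certsDir, hash)
        _ = os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(certPath, link)
    }
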
	I1020 12:38:50.848371  215841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:38:50.852838  215841 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 12:38:50.852893  215841 kubeadm.go:400] StartCluster: {Name:cert-expiration-365628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-365628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:38:50.852972  215841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:38:50.853025  215841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:38:50.889271  215841 cri.go:89] found id: ""
	I1020 12:38:50.889328  215841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:38:50.901390  215841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:38:50.911601  215841 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 12:38:50.911640  215841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:38:50.920934  215841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 12:38:50.920944  215841 kubeadm.go:157] found existing configuration files:
	
	I1020 12:38:50.920990  215841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 12:38:50.929123  215841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 12:38:50.929170  215841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 12:38:50.937478  215841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 12:38:50.945613  215841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 12:38:50.945661  215841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:38:50.953547  215841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 12:38:50.962473  215841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 12:38:50.962526  215841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:38:50.971124  215841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 12:38:50.979663  215841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 12:38:50.979715  215841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
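
Note: the grep/rm sequence above is stale-config cleanup: any kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. On a first start none of the files exist, so every grep exits with status 2 and the rm -f calls are no-ops. A sketch of one such check (function name assumed):

    package kubeadm

    import (
        "bytes"
        "os"
    )

    // removeStaleKubeconfig deletes a kubeconfig that does not point at the
    // expected control-plane endpoint, as the grep/rm pairs above do. Sketch.
    func removeStaleKubeconfig(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            if os.IsNotExist(err) {
                return nil // nothing to clean, the first-start case in the log
            }
            return err
        }
        if bytes.Contains(data, []byte(endpoint)) {
            return nil // config already targets the right endpoint
        }
        return os.Remove(path)
    }
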
	I1020 12:38:50.987591  215841 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 12:38:51.037328  215841 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 12:38:51.037391  215841 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 12:38:51.069722  215841 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 12:38:51.069812  215841 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1020 12:38:51.069843  215841 kubeadm.go:318] OS: Linux
	I1020 12:38:51.069889  215841 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 12:38:51.069935  215841 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 12:38:51.070002  215841 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 12:38:51.070063  215841 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 12:38:51.070103  215841 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 12:38:51.070167  215841 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 12:38:51.070238  215841 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 12:38:51.070315  215841 kubeadm.go:318] CGROUPS_IO: enabled
	I1020 12:38:51.166444  215841 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 12:38:51.166560  215841 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 12:38:51.166989  215841 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 12:38:51.177906  215841 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 12:38:51.180870  215841 out.go:252]   - Generating certificates and keys ...
	I1020 12:38:51.180984  215841 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 12:38:51.181101  215841 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
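
Note: kubeadm init is launched with the versioned binaries directory prepended to PATH and a long --ignore-preflight-errors list, since several host checks (Swap, Mem, SystemVerification, bridge-nf-call-iptables) are not meaningful inside the docker driver. A sketch of that invocation via os/exec; the helper name is assumed and the sudo wrapper is omitted:

    package kubeadm

    import (
        "os"
        "os/exec"
        "strings"
    )

    // runKubeadmInit mirrors the "kubeadm init --config ... --ignore-preflight-errors=..."
    // start line above, with binDir first on PATH. Illustrative sketch.
    func runKubeadmInit(binDir, config string, ignore []string) error {
        args := []string{"init", "--config", config}
        if len(ignore) > 0 {
            args = append(args, "--ignore-preflight-errors="+strings.Join(ignore, ","))
        }
        cmd := exec.Command(binDir+"/kubeadm", args...)
        cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }
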
	I1020 12:38:49.226549  215874 cli_runner.go:164] Run: docker network inspect force-systemd-flag-670413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:38:49.249979  215874 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1020 12:38:49.254482  215874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:38:49.265736  215874 kubeadm.go:883] updating cluster {Name:force-systemd-flag-670413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-670413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:38:49.265869  215874 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:38:49.265937  215874 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:38:49.307473  215874 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:38:49.307497  215874 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:38:49.307543  215874 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:38:49.337378  215874 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:38:49.337408  215874 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:38:49.337419  215874 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1020 12:38:49.337524  215874 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-670413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-670413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 12:38:49.337617  215874 ssh_runner.go:195] Run: crio config
	I1020 12:38:49.393877  215874 cni.go:84] Creating CNI manager for ""
	I1020 12:38:49.393903  215874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:38:49.393920  215874 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:38:49.393942  215874 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-670413 NodeName:force-systemd-flag-670413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:38:49.394068  215874 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-670413"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
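	The YAML above is the complete kubeadm configuration that minikube renders and copies to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp and kubeadm init lines that follow). To sanity-check a rendered config like this offline, a minimal sketch, assuming a recent kubeadm (v1.26+) is on PATH and the YAML is saved locally as kubeadm.yaml:
	
	  # Validate the file against kubeadm's config schema; makes no cluster changes.
	  kubeadm config validate --config kubeadm.yaml
	  # Or dry-run init, which also exercises the preflight checks seen later in this log.
	  sudo kubeadm init --config kubeadm.yaml --dry-run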
	
	I1020 12:38:49.394175  215874 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:38:49.403472  215874 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:38:49.403543  215874 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:38:49.413244  215874 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1020 12:38:49.429014  215874 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:38:49.448528  215874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1020 12:38:49.465704  215874 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:38:49.470587  215874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:38:49.484093  215874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:38:49.610564  215874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:38:49.636716  215874 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413 for IP: 192.168.103.2
	I1020 12:38:49.636740  215874 certs.go:195] generating shared ca certs ...
	I1020 12:38:49.636759  215874 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:49.636917  215874 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:38:49.636975  215874 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:38:49.636984  215874 certs.go:257] generating profile certs ...
	I1020 12:38:49.637051  215874 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/client.key
	I1020 12:38:49.637069  215874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/client.crt with IP's: []
	I1020 12:38:49.999684  215874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/client.crt ...
	I1020 12:38:49.999714  215874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/client.crt: {Name:mkf7833485c5e108beef642f3b98102f1f80ab65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:49.999912  215874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/client.key ...
	I1020 12:38:49.999931  215874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/client.key: {Name:mk2567e7cb18ddb62242b3a6d7cecb30cd4a177a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.000159  215874 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.key.ee9766ec
	I1020 12:38:50.000193  215874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.crt.ee9766ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1020 12:38:50.716576  215874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.crt.ee9766ec ...
	I1020 12:38:50.716602  215874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.crt.ee9766ec: {Name:mk02c81e252ab1b97f04d4da3684ca09a1504b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.716755  215874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.key.ee9766ec ...
	I1020 12:38:50.716782  215874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.key.ee9766ec: {Name:mk78ece10be652b1956c38bb342d732f8fa0842b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:50.716894  215874 certs.go:382] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.crt.ee9766ec -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.crt
	I1020 12:38:50.716970  215874 certs.go:386] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.key.ee9766ec -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.key
	I1020 12:38:50.717020  215874 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/proxy-client.key
	I1020 12:38:50.717032  215874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/proxy-client.crt with IP's: []
	I1020 12:38:51.204598  215874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/proxy-client.crt ...
	I1020 12:38:51.204631  215874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/proxy-client.crt: {Name:mk8b067d7fdcb6f0d3e0602a4997ce4522899f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:51.204851  215874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/proxy-client.key ...
	I1020 12:38:51.204871  215874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/proxy-client.key: {Name:mkb2e4c3ae4261f3945eece28fa98b4524ab2b2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:38:51.204973  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1020 12:38:51.204995  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1020 12:38:51.205010  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1020 12:38:51.205024  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1020 12:38:51.205049  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1020 12:38:51.205063  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1020 12:38:51.205077  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1020 12:38:51.205090  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1020 12:38:51.205149  215874 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:38:51.205196  215874 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:38:51.205207  215874 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:38:51.205238  215874 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:38:51.205263  215874 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:38:51.205288  215874 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:38:51.205335  215874 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:38:51.205382  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:51.205397  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem -> /usr/share/ca-certificates/14592.pem
	I1020 12:38:51.205411  215874 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> /usr/share/ca-certificates/145922.pem
	I1020 12:38:51.206296  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:38:51.226108  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:38:51.245300  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:38:51.266644  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:38:51.290923  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1020 12:38:51.312307  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:38:51.336930  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:38:51.361666  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/force-systemd-flag-670413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 12:38:51.382348  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:38:51.404257  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:38:51.422885  215874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:38:51.440569  215874 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:38:51.455540  215874 ssh_runner.go:195] Run: openssl version
	I1020 12:38:51.462090  215874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:38:51.472701  215874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:38:51.477103  215874 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:38:51.477163  215874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:38:51.516634  215874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:38:51.527654  215874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:38:51.537864  215874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:51.541995  215874 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:51.542061  215874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:38:51.583461  215874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:38:51.594143  215874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:38:51.603326  215874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:38:51.607348  215874 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:38:51.607406  215874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:38:51.654812  215874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
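	The openssl/ln sequence above implements OpenSSL's subject-hash lookup convention: each CA certificate installed under /etc/ssl/certs gets a symlink named <subject-hash>.0 (here 3ec20f2e.0, b5213941.0 and 51391683.0) so TLS clients can find it by hash. A sketch of the same step for a single certificate, with my-ca.pem as a hypothetical stand-in:
	
	  # Compute the subject hash, then create the hash-named symlink OpenSSL expects.
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
	  sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${hash}.0"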
	I1020 12:38:51.665825  215874 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:38:51.670208  215874 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 12:38:51.670271  215874 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-670413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-670413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:38:51.670350  215874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:38:51.670413  215874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:38:51.701341  215874 cri.go:89] found id: ""
	I1020 12:38:51.701410  215874 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:38:51.710627  215874 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:38:51.718710  215874 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 12:38:51.718787  215874 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:38:51.726969  215874 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 12:38:51.726988  215874 kubeadm.go:157] found existing configuration files:
	
	I1020 12:38:51.727053  215874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 12:38:51.735796  215874 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 12:38:51.735862  215874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 12:38:51.743874  215874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 12:38:51.751852  215874 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 12:38:51.751910  215874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:38:51.759905  215874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 12:38:51.768321  215874 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 12:38:51.768388  215874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:38:51.777411  215874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 12:38:51.786125  215874 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 12:38:51.786187  215874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 12:38:51.795039  215874 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 12:38:51.840057  215874 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 12:38:51.840149  215874 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 12:38:51.867033  215874 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 12:38:51.867135  215874 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1020 12:38:51.867181  215874 kubeadm.go:318] OS: Linux
	I1020 12:38:51.867253  215874 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 12:38:51.867317  215874 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 12:38:51.867379  215874 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 12:38:51.867441  215874 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 12:38:51.867501  215874 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 12:38:51.867562  215874 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 12:38:51.867626  215874 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 12:38:51.867683  215874 kubeadm.go:318] CGROUPS_IO: enabled
	I1020 12:38:51.937930  215874 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 12:38:51.938097  215874 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 12:38:51.938224  215874 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 12:38:51.947324  215874 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.385273711Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.3861969Z" level=info msg="Conmon does support the --sync option"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.386222996Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.386244082Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.387029823Z" level=info msg="Conmon does support the --sync option"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.387052157Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.391544344Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.391572397Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.392137484Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.392606422Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.392667654Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.399015633Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.447214019Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-wnfvn Namespace:kube-system ID:3fc81582749368af7068e357f0baf0831f08bc049fdbeb81a77e6e49757ebd1f UID:456a6380-cb4a-4846-be4c-30bba34b7db3 NetNS:/var/run/netns/68bba9c1-7708-480c-9316-a7b7cc090194 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a080}] Aliases:map[]}"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.447526322Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-wnfvn for CNI network kindnet (type=ptp)"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448083607Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448115021Z" level=info msg="Starting seccomp notifier watcher"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448175726Z" level=info msg="Create NRI interface"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448291543Z" level=info msg="built-in NRI default validator is disabled"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448302133Z" level=info msg="runtime interface created"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448314981Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448322967Z" level=info msg="runtime interface starting up..."
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448330642Z" level=info msg="starting plugins..."
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448359954Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 20 12:38:45 pause-918853 crio[2216]: time="2025-10-20T12:38:45.448766569Z" level=info msg="No systemd watchdog enabled"
	Oct 20 12:38:45 pause-918853 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
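	The long "Current CRI-O configuration" entry above is CRI-O's dump of its merged TOML config; minikube gathers the same output by running crio config on the node (visible earlier in this log). To spot-check a single setting, for example the cgroup manager that the force-systemd tests care about, one possible invocation, assuming the pause-918853 profile is still running:
	
	  # Dump the merged CRI-O config over ssh and grep the cgroup manager setting.
	  minikube ssh -p pause-918853 -- crio config 2>/dev/null | grep cgroup_manager
	  # expected here: cgroup_manager = "systemd"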
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	72a3b202a7641       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   19 seconds ago      Running             coredns                   0                   3fc8158274936       coredns-66bc5c9577-wnfvn               kube-system
	c9e90a7b75b16       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   30 seconds ago      Running             kube-proxy                0                   a8d026ff182de       kube-proxy-9md6s                       kube-system
	8845cf52f71fb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   30 seconds ago      Running             kindnet-cni               0                   f57d3471af5f7       kindnet-pvqlr                          kube-system
	ee48e32b2f57c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   41 seconds ago      Running             etcd                      0                   6d2c55e6a2370       etcd-pause-918853                      kube-system
	84c4c4d5781d5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   41 seconds ago      Running             kube-apiserver            0                   09d05aae12bf9       kube-apiserver-pause-918853            kube-system
	e0e2d9777d82f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   41 seconds ago      Running             kube-scheduler            0                   9c59c659ea945       kube-scheduler-pause-918853            kube-system
	9f83eedefaca2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   41 seconds ago      Running             kube-controller-manager   0                   9bf6d6bab963b       kube-controller-manager-pause-918853   kube-system
	
	
	==> coredns [72a3b202a76412f26700ad62c38784891b6c00402b287588a8795c5e217ecc86] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54848 - 49809 "HINFO IN 8001088817092132921.4386291470931721400. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020392259s
	
	
	==> describe nodes <==
	Name:               pause-918853
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-918853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=pause-918853
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_38_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:38:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-918853
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:38:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:38:47 +0000   Mon, 20 Oct 2025 12:38:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:38:47 +0000   Mon, 20 Oct 2025 12:38:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:38:47 +0000   Mon, 20 Oct 2025 12:38:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:38:47 +0000   Mon, 20 Oct 2025 12:38:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-918853
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                4fc86615-9ae4-4756-b290-33e6674fa76f
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-wnfvn                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-pause-918853                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-pvqlr                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-pause-918853             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-pause-918853    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-9md6s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-pause-918853             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30s   kube-proxy       
	  Normal  Starting                 37s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s   kubelet          Node pause-918853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s   kubelet          Node pause-918853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s   kubelet          Node pause-918853 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s   node-controller  Node pause-918853 event: Registered Node pause-918853 in Controller
	  Normal  NodeReady                20s   kubelet          Node pause-918853 status is now: NodeReady
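	The node dump above is kubectl describe node output captured by minikube logs. To reproduce it against a live profile, a sketch (minikube names the kubeconfig context after the profile):
	
	  kubectl --context pause-918853 describe node pause-918853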
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [ee48e32b2f57cf831f9662b4a8970dd4580fe5fff3bbd3ab9b8a106a97178013] <==
	{"level":"warn","ts":"2025-10-20T12:38:13.908725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.917899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.931870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.938922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.945954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.952811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.961855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.969528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.975866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.981962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:13.988252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.002165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.010487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.021511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.028520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.035182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.048229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.054701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.069622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.078577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.086797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:14.138229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:38:41.445746Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.344722ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596502342613536 > lease_revoke:<id:06ed9a01a09639a9>","response":"size:28"}
	{"level":"warn","ts":"2025-10-20T12:38:48.017937Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.715567ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-9md6s\" limit:1 ","response":"range_response_count:1 size:5033"}
	{"level":"info","ts":"2025-10-20T12:38:48.018005Z","caller":"traceutil/trace.go:172","msg":"trace[61134928] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-9md6s; range_end:; response_count:1; response_revision:406; }","duration":"183.827672ms","start":"2025-10-20T12:38:47.834163Z","end":"2025-10-20T12:38:48.017991Z","steps":["trace[61134928] 'range keys from in-memory index tree'  (duration: 183.541493ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:38:53 up  1:21,  0 user,  load average: 5.54, 3.02, 1.70
	Linux pause-918853 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8845cf52f71fb552c506de59a81f18ebd549bf1903b7034a503f9a73ce2b6fd1] <==
	I1020 12:38:23.374021       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:38:23.374354       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 12:38:23.374513       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:38:23.374532       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:38:23.374562       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:38:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:38:23.579699       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:38:23.579736       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:38:23.579749       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:38:23.775517       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:38:23.882509       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:38:23.882537       1 metrics.go:72] Registering metrics
	I1020 12:38:23.882584       1 controller.go:711] "Syncing nftables rules"
	I1020 12:38:33.583859       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:38:33.583939       1 main.go:301] handling current node
	I1020 12:38:43.585870       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:38:43.585913       1 main.go:301] handling current node
	I1020 12:38:53.583891       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:38:53.583950       1 main.go:301] handling current node
	
	
	==> kube-apiserver [84c4c4d5781d5e4a18aa2f86b8f181bb6608c642b20ac03d501b5e5dcf22e42b] <==
	I1020 12:38:14.710169       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1020 12:38:14.710828       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1020 12:38:14.713203       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:38:14.713482       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:38:14.713645       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1020 12:38:14.718257       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:38:14.718479       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:38:14.883843       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:38:15.587707       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1020 12:38:15.591972       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1020 12:38:15.592003       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:38:16.125581       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:38:16.166162       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:38:16.292061       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1020 12:38:16.299809       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1020 12:38:16.301234       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:38:16.306895       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:38:16.620359       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:38:17.072165       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:38:17.082559       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1020 12:38:17.090865       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 12:38:22.472852       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 12:38:22.584524       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:38:22.589614       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:38:22.688192       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9f83eedefaca2713366a42166d43e08671f32da7f80f270c7d3e27b91389998c] <==
	I1020 12:38:21.619639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:38:21.619657       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:38:21.619667       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:38:21.619848       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 12:38:21.621486       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 12:38:21.621520       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1020 12:38:21.621529       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 12:38:21.621552       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1020 12:38:21.621568       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 12:38:21.621595       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 12:38:21.621613       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 12:38:21.621625       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 12:38:21.621598       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 12:38:21.621628       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:38:21.621614       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 12:38:21.622071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 12:38:21.622959       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 12:38:21.623086       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1020 12:38:21.624269       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 12:38:21.628543       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 12:38:21.629744       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:38:21.630951       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1020 12:38:21.637154       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 12:38:21.644617       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:38:36.572052       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c9e90a7b75b16fc8e8ef756cc88964cd974e6f53a9390b604d0d29be3e4e48e8] <==
	I1020 12:38:23.122231       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:38:23.178280       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:38:23.279132       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:38:23.279176       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 12:38:23.279304       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:38:23.299351       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:38:23.300409       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:38:23.306218       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:38:23.306613       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:38:23.306630       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:38:23.307914       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:38:23.307941       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:38:23.307951       1 config.go:200] "Starting service config controller"
	I1020 12:38:23.307970       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:38:23.307994       1 config.go:309] "Starting node config controller"
	I1020 12:38:23.308003       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:38:23.308180       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:38:23.308194       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:38:23.408388       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:38:23.408395       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:38:23.408406       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 12:38:23.408504       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e0e2d9777d82f4ff2db4444ef7768324a1d003e72c9d5d301c966ab348bbfb96] <==
	E1020 12:38:14.645248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:38:14.645271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:38:14.645304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:38:14.645334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:38:14.645337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:38:14.645404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:38:14.645439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:38:14.645517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:38:14.645583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:38:14.645714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:38:14.645737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:38:15.490498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:38:15.500040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:38:15.500959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:38:15.554107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:38:15.634302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:38:15.663351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:38:15.663379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:38:15.704016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 12:38:15.902726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:38:15.938294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:38:15.938399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 12:38:15.948278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:38:16.086939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1020 12:38:18.041521       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.908715    1358 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.908816    1358 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.908854    1358 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.908866    1358 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.979194    1358 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.979258    1358 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:42 pause-918853 kubelet[1358]: E1020 12:38:42.979276    1358 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:43 pause-918853 kubelet[1358]: W1020 12:38:43.195493    1358 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 20 12:38:43 pause-918853 kubelet[1358]: E1020 12:38:43.980292    1358 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 20 12:38:43 pause-918853 kubelet[1358]: E1020 12:38:43.980359    1358 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:43 pause-918853 kubelet[1358]: E1020 12:38:43.980377    1358 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.908510    1358 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.908591    1358 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.908614    1358 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.908632    1358 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.980599    1358 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.980675    1358 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:44 pause-918853 kubelet[1358]: E1020 12:38:44.980696    1358 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:45 pause-918853 kubelet[1358]: E1020 12:38:45.981630    1358 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 20 12:38:45 pause-918853 kubelet[1358]: E1020 12:38:45.981690    1358 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:45 pause-918853 kubelet[1358]: E1020 12:38:45.981710    1358 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 20 12:38:49 pause-918853 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 12:38:49 pause-918853 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 12:38:49 pause-918853 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 20 12:38:49 pause-918853 systemd[1]: kubelet.service: Consumed 1.383s CPU time.
	

                                                
                                                
-- /stdout --
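A note on the excerpt above: etcd reports read-only range requests taking ~184ms against its 100ms budget, and the kernel section shows a host load average of 5.54, so the latency warnings are plausibly contention on the shared CI host rather than a cluster fault. A minimal sketch for pulling such warnings out of a profile's logs (the profile name pause-918853 is taken from this run; the grep pattern is copied from the etcd message):

	out/minikube-linux-amd64 -p pause-918853 logs | grep 'apply request took too long'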
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-918853 -n pause-918853
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-918853 -n pause-918853: exit status 2 (349.923903ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-918853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.69s)
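The kubelet excerpt above shows the actual pause failure mode: every CRI call dies with dial unix /var/run/crio/crio.sock: connect: no such file or directory, i.e. CRI-O had stopped serving its socket by the time minikube pause issued its runtime checks. A sketch of how one could confirm this on a live node, mirroring the ssh invocations recorded in the audit tables elsewhere in this report (illustrative only, since the pause-918853 profile is deleted at the end of the test):

	out/minikube-linux-amd64 ssh -p pause-918853 sudo systemctl status crio --no-pager
	out/minikube-linux-amd64 ssh -p pause-918853 sudo ls -l /var/run/crio/crio.sock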

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-384253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-384253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (246.849445ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:40:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-384253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
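The failure is not in the addon itself: before enabling an addon, minikube runs a "check paused" step that shells out to sudo runc list -f json inside the node, and runc exits with status 1 because its state directory /run/runc is missing (/run is a tmpfs in the kic container, per the Tmpfs entries in the docker inspect output below, so the directory presumably only appears once runc has written state there). A minimal reproduction sketch against the same profile, using the exact command from the error text:

	out/minikube-linux-amd64 ssh -p old-k8s-version-384253 sudo runc list -f json
	out/minikube-linux-amd64 ssh -p old-k8s-version-384253 sudo ls /run/runc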
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-384253 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-384253 describe deploy/metrics-server -n kube-system: exit status 1 (58.893294ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-384253 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
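The empty deployment info follows directly from the failed enable: the metrics-server deployment is only created by the addon, so there is nothing to describe. Once the addon does apply, a one-line check for the expected fake.domain image would be (a sketch; the jsonpath is standard kubectl, nothing minikube-specific):

	kubectl --context old-k8s-version-384253 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'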
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-384253
helpers_test.go:243: (dbg) docker inspect old-k8s-version-384253:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3",
	        "Created": "2025-10-20T12:39:15.199417657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 229909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:39:15.24998648Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3/hosts",
	        "LogPath": "/var/lib/docker/containers/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3-json.log",
	        "Name": "/old-k8s-version-384253",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-384253:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-384253",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3",
	                "LowerDir": "/var/lib/docker/overlay2/90952055fc44c355f2c0ef9297152faad84fdfa9d5aaa20b7f5efe37b11adc6c-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/90952055fc44c355f2c0ef9297152faad84fdfa9d5aaa20b7f5efe37b11adc6c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/90952055fc44c355f2c0ef9297152faad84fdfa9d5aaa20b7f5efe37b11adc6c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/90952055fc44c355f2c0ef9297152faad84fdfa9d5aaa20b7f5efe37b11adc6c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-384253",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-384253/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-384253",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-384253",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-384253",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "71013e6e6811cce03208105e396827b56b87a70a7deb5e85325a96f7c2c3502b",
	            "SandboxKey": "/var/run/docker/netns/71013e6e6811",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-384253": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:29:b5:c3:0f:81",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "297cbf1591dbfc42eff4519f7180072339a2b6c16821ef2400eadb774f669261",
	                    "EndpointID": "d2592f8e3750702e878fccdf6daecf46ff9755c9ed3695675cc48cd46e6b6914",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-384253",
	                        "42a1b3150f06"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
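Two fields in the inspect dump above matter for this failure: State.Status is "running" and State.Paused is false, so the kic container itself is healthy and the paused-check error originates inside the node rather than at the Docker layer. A compact way to read just those fields instead of the full JSON (standard docker Go-template formatting, a sketch):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-384253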
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384253 -n old-k8s-version-384253
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-384253 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-384253 logs -n 25: (1.007140864s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-312375 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo containerd config dump                                                                                                                                                                                                  │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo crio config                                                                                                                                                                                                             │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ delete  │ -p cilium-312375                                                                                                                                                                                                                              │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ start   │ -p cert-expiration-365628 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-365628    │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p force-systemd-flag-670413 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-670413 │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p pause-918853 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ pause   │ -p pause-918853 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ delete  │ -p pause-918853                                                                                                                                                                                                                               │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ start   │ -p cert-options-418869 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p missing-upgrade-123936                                                                                                                                                                                                                     │ missing-upgrade-123936    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ force-systemd-flag-670413 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-670413 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p force-systemd-flag-670413                                                                                                                                                                                                                  │ force-systemd-flag-670413 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ cert-options-418869 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ -p cert-options-418869 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p cert-options-418869                                                                                                                                                                                                                        │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-196539                                                                                                                                                                                                                  │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-384253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:39:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:39:36.143350  236655 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:39:36.143581  236655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:39:36.143589  236655 out.go:374] Setting ErrFile to fd 2...
	I1020 12:39:36.143593  236655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:39:36.143793  236655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:39:36.144270  236655 out.go:368] Setting JSON to false
	I1020 12:39:36.145424  236655 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4925,"bootTime":1760959051,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:39:36.145518  236655 start.go:141] virtualization: kvm guest
	I1020 12:39:36.147647  236655 out.go:179] * [kubernetes-upgrade-196539] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:39:36.149055  236655 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:39:36.149060  236655 notify.go:220] Checking for updates...
	I1020 12:39:36.150507  236655 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:39:36.152050  236655 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:39:36.153480  236655 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:39:36.154793  236655 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:39:36.156137  236655 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:39:36.158078  236655 config.go:182] Loaded profile config "kubernetes-upgrade-196539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1020 12:39:36.158708  236655 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:39:36.184917  236655 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:39:36.185006  236655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:39:36.247411  236655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-20 12:39:36.236428653 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:39:36.247509  236655 docker.go:318] overlay module found
	I1020 12:39:36.249461  236655 out.go:179] * Using the docker driver based on existing profile
	I1020 12:39:36.250845  236655 start.go:305] selected driver: docker
	I1020 12:39:36.250860  236655 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-196539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-196539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:39:36.250940  236655 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:39:36.251521  236655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:39:36.310754  236655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-20 12:39:36.301088522 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:39:36.311161  236655 cni.go:84] Creating CNI manager for ""
	I1020 12:39:36.311230  236655 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
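
With the docker driver and the crio runtime, minikube's CNI chooser recommends kindnet rather than relying on a runtime-provided bridge. Reduced to this one pair, the decision is a lookup; the sketch below is illustrative and far smaller than the real table in minikube's cni package:

    package main

    import "fmt"

    // chooseCNI mirrors the decision logged above for one driver/runtime
    // pair. minikube's actual table covers many more combinations.
    func chooseCNI(driver, runtime string) string {
        if driver == "docker" && runtime == "crio" {
            return "kindnet"
        }
        return "bridge" // fallback in this sketch only
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio")) // -> kindnet
    }
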
	I1020 12:39:36.311275  236655 start.go:349] cluster config:
	{Name:kubernetes-upgrade-196539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-196539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:39:36.316888  236655 out.go:179] * Starting "kubernetes-upgrade-196539" primary control-plane node in "kubernetes-upgrade-196539" cluster
	I1020 12:39:36.318433  236655 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:39:36.319865  236655 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:39:36.321207  236655 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:39:36.321253  236655 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:39:36.321271  236655 cache.go:58] Caching tarball of preloaded images
	I1020 12:39:36.321357  236655 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:39:36.321382  236655 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:39:36.321393  236655 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:39:36.321506  236655 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/config.json ...
	I1020 12:39:36.342885  236655 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:39:36.342906  236655 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:39:36.342924  236655 cache.go:232] Successfully downloaded all kic artifacts
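
The cache phase above never hits the network: the v1.34.1 cri-o preload tarball is already on disk and the pinned kicbase image is already in the local daemon, so both downloads are skipped. A minimal sketch of that check order, with an illustrative path and helper (not minikube's real API):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // preloadPath is abbreviated here; the log shows the full path under
    // /home/jenkins/minikube-integration/21773-11075/.minikube/cache.
    const preloadPath = ".minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"

    // imageInDaemon asks the local docker daemon whether ref is present.
    func imageInDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        if _, err := os.Stat(preloadPath); err == nil {
            fmt.Println("found local preload, skipping download")
        }
        if imageInDaemon("gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773") {
            fmt.Println("kicbase exists in daemon, skipping load")
        }
    }
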
	I1020 12:39:36.342951  236655 start.go:360] acquireMachinesLock for kubernetes-upgrade-196539: {Name:mk1d06f9572547ac12885711cb1bcf0c77e257ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:39:36.343038  236655 start.go:364] duration metric: took 63.851µs to acquireMachinesLock for "kubernetes-upgrade-196539"
	I1020 12:39:36.343060  236655 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:39:36.343067  236655 fix.go:54] fixHost starting: 
	I1020 12:39:36.343276  236655 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-196539 --format={{.State.Status}}
	I1020 12:39:36.361573  236655 fix.go:112] recreateIfNeeded on kubernetes-upgrade-196539: state=Stopped err=<nil>
	W1020 12:39:36.361599  236655 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 12:39:33.972834  235059 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 12:39:33.973114  235059 start.go:159] libmachine.API.Create for "no-preload-649841" (driver="docker")
	I1020 12:39:33.973150  235059 client.go:168] LocalClient.Create starting
	I1020 12:39:33.973237  235059 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 12:39:33.973285  235059 main.go:141] libmachine: Decoding PEM data...
	I1020 12:39:33.973305  235059 main.go:141] libmachine: Parsing certificate...
	I1020 12:39:33.973373  235059 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 12:39:33.973396  235059 main.go:141] libmachine: Decoding PEM data...
	I1020 12:39:33.973408  235059 main.go:141] libmachine: Parsing certificate...
	I1020 12:39:33.973838  235059 cli_runner.go:164] Run: docker network inspect no-preload-649841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:39:34.002215  235059 cli_runner.go:211] docker network inspect no-preload-649841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:39:34.002303  235059 network_create.go:284] running [docker network inspect no-preload-649841] to gather additional debugging logs...
	I1020 12:39:34.002329  235059 cli_runner.go:164] Run: docker network inspect no-preload-649841
	W1020 12:39:34.028295  235059 cli_runner.go:211] docker network inspect no-preload-649841 returned with exit code 1
	I1020 12:39:34.028327  235059 network_create.go:287] error running [docker network inspect no-preload-649841]: docker network inspect no-preload-649841: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-649841 not found
	I1020 12:39:34.028345  235059 network_create.go:289] output of [docker network inspect no-preload-649841]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-649841 not found
	
	** /stderr **
	I1020 12:39:34.028452  235059 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:39:34.054536  235059 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
	I1020 12:39:34.055317  235059 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b260bb82684 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:b2:17:6a:60:46} reservation:<nil>}
	I1020 12:39:34.056080  235059 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-db577f5b7d12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:85:a2:b7:03:1d} reservation:<nil>}
	I1020 12:39:34.056541  235059 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1f871d5cfd48 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a2:c6:86:42:b6:13} reservation:<nil>}
	I1020 12:39:34.057509  235059 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e83b60}
	I1020 12:39:34.057542  235059 network_create.go:124] attempt to create docker network no-preload-649841 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1020 12:39:34.057605  235059 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-649841 no-preload-649841
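
The four "skipping subnet" lines before this create are the free-subnet probe: candidates advance through private /24 ranges in steps of 9 on the third octet (49, 58, 67, 76) until 192.168.85.0/24 comes back free. A rough equivalent of that walk follows; the step size is read off this log, and the isTaken test is a stand-in for minikube's interface and reservation checks:

    package main

    import (
        "fmt"
        "net"
    )

    // isTaken reports whether any local interface address falls inside cidr,
    // which is what makes a candidate subnet "taken" in this sketch.
    func isTaken(cidr string) bool {
        _, subnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return true
        }
        addrs, _ := net.InterfaceAddrs()
        for _, a := range addrs {
            if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
                return true
            }
        }
        return false
    }

    func main() {
        for octet := 49; octet < 255; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if isTaken(cidr) {
                fmt.Println("skipping subnet that is taken:", cidr)
                continue
            }
            fmt.Println("using free private subnet:", cidr)
            break
        }
    }
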
	I1020 12:39:34.101973  235059 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1020 12:39:34.104743  235059 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1020 12:39:34.112249  235059 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1020 12:39:34.115637  235059 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1020 12:39:34.118245  235059 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1020 12:39:34.136154  235059 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1020 12:39:34.140514  235059 network_create.go:108] docker network no-preload-649841 192.168.85.0/24 created
	I1020 12:39:34.140546  235059 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-649841" container
	I1020 12:39:34.140615  235059 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:39:34.146080  235059 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1020 12:39:34.163302  235059 cli_runner.go:164] Run: docker volume create no-preload-649841 --label name.minikube.sigs.k8s.io=no-preload-649841 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:39:34.186332  235059 oci.go:103] Successfully created a docker volume no-preload-649841
	I1020 12:39:34.186411  235059 cli_runner.go:164] Run: docker run --rm --name no-preload-649841-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-649841 --entrypoint /usr/bin/test -v no-preload-649841:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
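
The preload sidecar above is a one-shot trick: mounting the freshly created named volume at /var makes docker seed it from the image's /var contents on first use, and running /usr/bin/test -d /var/lib as the entrypoint turns "volume is populated and mountable" into an exit code. The same pattern, generalized (names taken from this run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // First mount of a named volume copies the image's /var into it;
        // the test entrypoint then verifies /var/lib exists inside.
        err := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/test",
            "-v", "no-preload-649841:/var",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773",
            "-d", "/var/lib").Run()
        if err != nil {
            fmt.Println("volume not prepared:", err)
            return
        }
        fmt.Println("volume seeded and verified")
    }
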
	I1020 12:39:34.225457  235059 cache.go:157] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1020 12:39:34.225483  235059 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 285.875564ms
	I1020 12:39:34.225495  235059 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1020 12:39:34.438833  235059 cache.go:157] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1020 12:39:34.438858  235059 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 499.292393ms
	I1020 12:39:34.438871  235059 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1020 12:39:34.655538  235059 oci.go:107] Successfully prepared a docker volume no-preload-649841
	I1020 12:39:34.655572  235059 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1020 12:39:34.655675  235059 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 12:39:34.655711  235059 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 12:39:34.655759  235059 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:39:34.728378  235059 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-649841 --name no-preload-649841 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-649841 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-649841 --network no-preload-649841 --ip 192.168.85.2 --volume no-preload-649841:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
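
Every guest port in the docker run above is published to an ephemeral port on 127.0.0.1 (--publish=127.0.0.1::8443 and friends), so later steps must ask the daemon which host port actually landed on 22/tcp; that is exactly what the repeated container-inspect template in these logs does. Reproduced as a standalone sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same Go template the log uses to recover the ephemeral SSH port.
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "no-preload-649841").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("sshd mapped to 127.0.0.1:" + strings.TrimSpace(string(out)))
    }
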
	I1020 12:39:35.037876  235059 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Running}}
	I1020 12:39:35.060802  235059 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:39:35.085481  235059 cli_runner.go:164] Run: docker exec no-preload-649841 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:39:35.137541  235059 oci.go:144] the created container "no-preload-649841" has a running status.
	I1020 12:39:35.137570  235059 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa...
	I1020 12:39:35.262265  235059 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:39:35.298893  235059 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:39:35.329261  235059 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:39:35.329319  235059 kic_runner.go:114] Args: [docker exec --privileged no-preload-649841 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:39:35.384732  235059 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:39:35.407677  235059 cache.go:157] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1020 12:39:35.407710  235059 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.467974258s
	I1020 12:39:35.407725  235059 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1020 12:39:35.415302  235059 machine.go:93] provisionDockerMachine start ...
	I1020 12:39:35.415401  235059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:39:35.474089  235059 main.go:141] libmachine: Using SSH client type: native
	I1020 12:39:35.474466  235059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1020 12:39:35.474481  235059 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:39:35.595868  235059 cache.go:157] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1020 12:39:35.595906  235059 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.656354723s
	I1020 12:39:35.595923  235059 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1020 12:39:35.659884  235059 cache.go:157] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1020 12:39:35.659918  235059 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.720334027s
	I1020 12:39:35.659939  235059 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1020 12:39:35.683994  235059 cache.go:157] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1020 12:39:35.684019  235059 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.743960268s
	I1020 12:39:35.684033  235059 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1020 12:39:35.703987  235059 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-649841
	
	I1020 12:39:35.704011  235059 ubuntu.go:182] provisioning hostname "no-preload-649841"
	I1020 12:39:35.704071  235059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:39:35.725022  235059 main.go:141] libmachine: Using SSH client type: native
	I1020 12:39:35.725665  235059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1020 12:39:35.725742  235059 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-649841 && echo "no-preload-649841" | sudo tee /etc/hostname
	I1020 12:39:35.910972  235059 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-649841
	
	I1020 12:39:35.911054  235059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:39:35.931968  235059 main.go:141] libmachine: Using SSH client type: native
	I1020 12:39:35.932262  235059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1020 12:39:35.932296  235059 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-649841' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-649841/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-649841' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:39:36.015899  235059 cache.go:157] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1020 12:39:36.015935  235059 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.075967232s
	I1020 12:39:36.015949  235059 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1020 12:39:36.015968  235059 cache.go:87] Successfully saved all images to host disk.
	I1020 12:39:36.097989  235059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:39:36.098023  235059 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:39:36.098055  235059 ubuntu.go:190] setting up certificates
	I1020 12:39:36.098067  235059 provision.go:84] configureAuth start
	I1020 12:39:36.098111  235059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-649841
	I1020 12:39:36.121098  235059 provision.go:143] copyHostCerts
	I1020 12:39:36.121149  235059 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:39:36.121155  235059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:39:36.121217  235059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:39:36.121349  235059 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:39:36.121363  235059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:39:36.121420  235059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:39:36.121524  235059 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:39:36.121536  235059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:39:36.121574  235059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:39:36.121647  235059 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.no-preload-649841 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-649841]
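
configureAuth regenerates a server certificate whose SANs cover every name the machine might be dialed by: 127.0.0.1, the container's static IP, localhost, minikube, and the machine name. A compact standard-library sketch of issuing such a certificate follows; the SANs and org are copied from the log line above, the throwaway CA stands in for the real ca.pem/ca-key.pem, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; a real run would load the existing ca.pem/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration in the profile
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-649841"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            // san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-649841]
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:    []string{"localhost", "minikube", "no-preload-649841"},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
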
	I1020 12:39:36.546326  235059 provision.go:177] copyRemoteCerts
	I1020 12:39:36.546402  235059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:39:36.546439  235059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:39:36.566184  235059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:39:36.669265  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:39:36.689258  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:39:36.709024  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 12:39:36.727879  235059 provision.go:87] duration metric: took 629.79975ms to configureAuth
	I1020 12:39:36.727909  235059 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:39:36.728070  235059 config.go:182] Loaded profile config "no-preload-649841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:39:36.728160  235059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:39:36.746406  235059 main.go:141] libmachine: Using SSH client type: native
	I1020 12:39:36.746718  235059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1020 12:39:36.746746  235059 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:39:37.016432  235059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:39:37.016454  235059 machine.go:96] duration metric: took 1.60113053s to provisionDockerMachine
	I1020 12:39:37.016467  235059 client.go:171] duration metric: took 3.043310447s to LocalClient.Create
	I1020 12:39:37.016490  235059 start.go:167] duration metric: took 3.043377435s to libmachine.API.Create "no-preload-649841"
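
The container-runtime step just before these duration metrics writes a one-line environment file and bounces CRI-O; 10.96.0.0/12 is the cluster's ServiceCIDR, so in-cluster registries reachable through a Service IP can be pulled from without TLS. Reconstructed from the printf in the log, /etc/sysconfig/crio.minikube ends up containing exactly:

    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
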
	I1020 12:39:37.016506  235059 start.go:293] postStartSetup for "no-preload-649841" (driver="docker")
	I1020 12:39:37.016523  235059 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:39:37.016605  235059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:39:37.016680  235059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:39:37.035565  235059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:39:37.138711  235059 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:39:37.142492  235059 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:39:37.142528  235059 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:39:37.142541  235059 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:39:37.142610  235059 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:39:37.142715  235059 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:39:37.142858  235059 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:39:37.151087  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:39:37.171834  235059 start.go:296] duration metric: took 155.312369ms for postStartSetup
	I1020 12:39:37.172145  235059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-649841
	I1020 12:39:37.189916  235059 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/config.json ...
	I1020 12:39:37.190236  235059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:39:37.190275  235059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:39:37.210742  235059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:39:37.308179  235059 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:39:37.312762  235059 start.go:128] duration metric: took 3.342290489s to createHost
	I1020 12:39:37.312800  235059 start.go:83] releasing machines lock for "no-preload-649841", held for 3.342454733s
	I1020 12:39:37.312862  235059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-649841
	I1020 12:39:37.330350  235059 ssh_runner.go:195] Run: cat /version.json
	I1020 12:39:37.330400  235059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:39:37.330429  235059 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:39:37.330496  235059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:39:37.348500  235059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:39:37.348748  235059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:39:37.500024  235059 ssh_runner.go:195] Run: systemctl --version
	I1020 12:39:37.506436  235059 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:39:37.540263  235059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:39:37.545267  235059 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:39:37.545342  235059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:39:37.571351  235059 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1020 12:39:37.571379  235059 start.go:495] detecting cgroup driver to use...
	I1020 12:39:37.571416  235059 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:39:37.571465  235059 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:39:37.587614  235059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:39:37.599949  235059 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:39:37.600026  235059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:39:37.616872  235059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:39:37.634577  235059 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:39:37.716295  235059 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:39:37.803655  235059 docker.go:234] disabling docker service ...
	I1020 12:39:37.803713  235059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:39:37.822142  235059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:39:37.835882  235059 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:39:37.913919  235059 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:39:38.003586  235059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:39:38.017055  235059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:39:38.032901  235059 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:39:38.032956  235059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:38.043490  235059 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:39:38.043558  235059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:38.053183  235059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:38.063057  235059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:38.072900  235059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:39:38.081129  235059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:38.090107  235059 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:38.104341  235059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:38.113461  235059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:39:38.121607  235059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:39:38.131066  235059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:39:38.212468  235059 ssh_runner.go:195] Run: sudo systemctl restart crio
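
Before that restart, the sed/grep edits above have shaped /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to systemd (matching the driver detected on the host), move conmon into the pod cgroup, and open unprivileged low ports. Reconstructed from the commands rather than copied from the host, the touched keys come out roughly as:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
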
	I1020 12:39:38.314003  235059 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:39:38.314071  235059 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:39:38.318152  235059 start.go:563] Will wait 60s for crictl version
	I1020 12:39:38.318210  235059 ssh_runner.go:195] Run: which crictl
	I1020 12:39:38.321912  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:39:38.345652  235059 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:39:38.345739  235059 ssh_runner.go:195] Run: crio --version
	I1020 12:39:38.372375  235059 ssh_runner.go:195] Run: crio --version
	I1020 12:39:38.402019  235059 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:39:33.918961  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:34.419022  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:34.918599  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:35.419116  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:35.918978  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:36.418568  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:36.918423  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:37.418406  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:37.918396  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:38.418972  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
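
Interleaved with the no-preload bring-up, a third profile (PID 228540, still on v1.28.0) is polling for the default service account at a visible 500ms cadence. The retry shape is the usual poll-until-deadline loop; this sketch is not the test's actual code, and the two-minute budget is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.28.0/kubectl"
        deadline := time.Now().Add(2 * time.Minute) // assumed budget
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the log's spacing
        }
        fmt.Println("timed out waiting for default service account")
    }
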
	I1020 12:39:38.403286  235059 cli_runner.go:164] Run: docker network inspect no-preload-649841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:39:38.420833  235059 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 12:39:38.424680  235059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:39:38.435018  235059 kubeadm.go:883] updating cluster {Name:no-preload-649841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:39:38.435164  235059 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:39:38.435214  235059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:39:38.460104  235059 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1020 12:39:38.460132  235059 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1020 12:39:38.460184  235059 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:39:38.460209  235059 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 12:39:38.460267  235059 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1020 12:39:38.460280  235059 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 12:39:38.460290  235059 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 12:39:38.460324  235059 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1020 12:39:38.460385  235059 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 12:39:38.460442  235059 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1020 12:39:38.461507  235059 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 12:39:38.461538  235059 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:39:38.461553  235059 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1020 12:39:38.461631  235059 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 12:39:38.461642  235059 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1020 12:39:38.461740  235059 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1020 12:39:38.461793  235059 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 12:39:38.462183  235059 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 12:39:38.589647  235059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1020 12:39:38.596478  235059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1020 12:39:38.598531  235059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1020 12:39:38.602309  235059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 12:39:38.605022  235059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1020 12:39:38.630444  235059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1020 12:39:38.630744  235059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1020 12:39:38.631315  235059 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1020 12:39:38.631367  235059 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1020 12:39:38.631411  235059 ssh_runner.go:195] Run: which crictl
	I1020 12:39:38.642295  235059 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1020 12:39:38.642341  235059 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 12:39:38.642378  235059 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1020 12:39:38.642398  235059 ssh_runner.go:195] Run: which crictl
	I1020 12:39:38.642414  235059 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 12:39:38.642457  235059 ssh_runner.go:195] Run: which crictl
	I1020 12:39:38.650120  235059 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1020 12:39:38.650177  235059 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 12:39:38.650184  235059 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1020 12:39:38.650227  235059 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1020 12:39:38.650276  235059 ssh_runner.go:195] Run: which crictl
	I1020 12:39:38.650230  235059 ssh_runner.go:195] Run: which crictl
	I1020 12:39:38.671642  235059 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1020 12:39:38.671677  235059 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1020 12:39:38.671716  235059 ssh_runner.go:195] Run: which crictl
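
Every control-plane image comes back "needs transfer" because this profile deliberately runs without a preload: the runtime is probed for each image via podman image inspect, the returned ID is compared against the expected hash, and a miss queues the image for removal and reload from the host-side cache saved earlier. In outline (hypothetical helper, one image shown):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runtimeImageID returns the image ID the runtime reports for ref,
    // or "" when the image is absent. Mirrors the podman probe in the log.
    func runtimeImageID(ref string) string {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", ref).Output()
        if err != nil {
            return ""
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        // ref -> expected hash, as printed for pause:3.10.1 above.
        want := map[string]string{
            "registry.k8s.io/pause:3.10.1": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
        }
        for ref, id := range want {
            if runtimeImageID(ref) != id {
                fmt.Printf("%s needs transfer: not at hash %s in container runtime\n", ref, id)
                // a real loader would now remove the stale copy via crictl and
                // load the cached image from .minikube/cache/images/... (elided)
            }
        }
    }
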
	I1020 12:39:36.363664  236655 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-196539" ...
	I1020 12:39:36.363733  236655 cli_runner.go:164] Run: docker start kubernetes-upgrade-196539
	I1020 12:39:36.619231  236655 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-196539 --format={{.State.Status}}
	I1020 12:39:36.639014  236655 kic.go:430] container "kubernetes-upgrade-196539" state is running.
	I1020 12:39:36.639368  236655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-196539
	I1020 12:39:36.658534  236655 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/config.json ...
	I1020 12:39:36.658830  236655 machine.go:93] provisionDockerMachine start ...
	I1020 12:39:36.658913  236655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:39:36.679317  236655 main.go:141] libmachine: Using SSH client type: native
	I1020 12:39:36.679622  236655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1020 12:39:36.679640  236655 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:39:36.680345  236655 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58990->127.0.0.1:33058: read: connection reset by peer
	I1020 12:39:39.825164  236655 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-196539
	
	I1020 12:39:39.825192  236655 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-196539"
	I1020 12:39:39.825250  236655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:39:39.846659  236655 main.go:141] libmachine: Using SSH client type: native
	I1020 12:39:39.846896  236655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1020 12:39:39.846911  236655 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-196539 && echo "kubernetes-upgrade-196539" | sudo tee /etc/hostname
	I1020 12:39:40.013464  236655 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-196539
	
	I1020 12:39:40.013547  236655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:39:40.033860  236655 main.go:141] libmachine: Using SSH client type: native
	I1020 12:39:40.034154  236655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1020 12:39:40.034177  236655 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-196539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-196539/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-196539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:39:40.189976  236655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:39:40.190018  236655 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:39:40.190061  236655 ubuntu.go:190] setting up certificates
	I1020 12:39:40.190079  236655 provision.go:84] configureAuth start
	I1020 12:39:40.190130  236655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-196539
	I1020 12:39:40.212109  236655 provision.go:143] copyHostCerts
	I1020 12:39:40.212196  236655 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:39:40.212217  236655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:39:40.212388  236655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:39:40.212543  236655 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:39:40.212557  236655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:39:40.212606  236655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:39:40.212710  236655 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:39:40.212721  236655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:39:40.212761  236655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:39:40.212897  236655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-196539 san=[127.0.0.1 192.168.94.2 kubernetes-upgrade-196539 localhost minikube]
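
Note: provision.go:117 above generates a server certificate signed by the profile CA with san=[127.0.0.1 192.168.94.2 kubernetes-upgrade-196539 localhost minikube]. A sketch of issuing such a certificate with Go's crypto/x509, assuming the CA cert and key are already loaded; the SAN list and org come from the log, while the serial, validity window, and key size are assumptions:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate against an already-loaded CA,
// using the SAN list that provision.go logs above.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-196539"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[127.0.0.1 192.168.94.2 kubernetes-upgrade-196539 localhost minikube]
		DNSNames:    []string{"kubernetes-upgrade-196539", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}
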
	I1020 12:39:40.499153  236655 provision.go:177] copyRemoteCerts
	I1020 12:39:40.499217  236655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:39:40.499260  236655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:39:40.519103  236655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:39:40.621577  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:39:40.639795  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1020 12:39:40.658301  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 12:39:40.676541  236655 provision.go:87] duration metric: took 486.450899ms to configureAuth
	I1020 12:39:40.676569  236655 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:39:40.676764  236655 config.go:182] Loaded profile config "kubernetes-upgrade-196539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:39:40.676906  236655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:39:40.695410  236655 main.go:141] libmachine: Using SSH client type: native
	I1020 12:39:40.695641  236655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1020 12:39:40.695666  236655 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:39:40.967817  236655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:39:40.967842  236655 machine.go:96] duration metric: took 4.308996052s to provisionDockerMachine
	I1020 12:39:40.967855  236655 start.go:293] postStartSetup for "kubernetes-upgrade-196539" (driver="docker")
	I1020 12:39:40.967869  236655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:39:40.967944  236655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:39:40.967998  236655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:39:40.989750  236655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:39:41.098274  236655 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:39:41.102973  236655 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:39:41.103019  236655 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:39:41.103033  236655 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:39:41.103100  236655 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:39:41.103207  236655 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:39:41.103337  236655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:39:41.113291  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:39:41.140897  236655 start.go:296] duration metric: took 173.025077ms for postStartSetup
	I1020 12:39:41.140980  236655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:39:41.141061  236655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:39:38.919147  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:39.418849  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:39.918593  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:40.418961  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:40.919220  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:41.418983  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:41.918979  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:42.418978  228540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:39:42.512479  228540 kubeadm.go:1113] duration metric: took 11.69022713s to wait for elevateKubeSystemPrivileges
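
Note: the 228540 process above re-runs `kubectl get sa default` roughly every 500ms until the default service account appears, which is how elevateKubeSystemPrivileges decides the control plane is usable. A stdlib sketch of that retry loop — command and cadence come from the log; the 2-minute timeout is an assumption, and running the command locally rather than through ssh_runner is a simplification:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // timeout is an assumption
	for {
		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.0/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			return // service account visible; cluster is ready for addons
		}
		if time.Now().After(deadline) {
			log.Fatalf("default service account never appeared: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
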
	I1020 12:39:42.512521  228540 kubeadm.go:402] duration metric: took 22.131433754s to StartCluster
	I1020 12:39:42.512546  228540 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:39:42.512626  228540 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:39:42.513696  228540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:39:42.569557  228540 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 12:39:42.569581  228540 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:39:42.569709  228540 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:39:42.569820  228540 config.go:182] Loaded profile config "old-k8s-version-384253": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1020 12:39:42.569839  228540 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-384253"
	I1020 12:39:42.569858  228540 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-384253"
	I1020 12:39:42.569886  228540 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-384253"
	I1020 12:39:42.569897  228540 host.go:66] Checking if "old-k8s-version-384253" exists ...
	I1020 12:39:42.569905  228540 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-384253"
	I1020 12:39:42.570272  228540 cli_runner.go:164] Run: docker container inspect old-k8s-version-384253 --format={{.State.Status}}
	I1020 12:39:42.570455  228540 cli_runner.go:164] Run: docker container inspect old-k8s-version-384253 --format={{.State.Status}}
	I1020 12:39:42.637048  228540 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-384253"
	I1020 12:39:42.637104  228540 host.go:66] Checking if "old-k8s-version-384253" exists ...
	I1020 12:39:42.637642  228540 cli_runner.go:164] Run: docker container inspect old-k8s-version-384253 --format={{.State.Status}}
	I1020 12:39:42.638835  228540 out.go:179] * Verifying Kubernetes components...
	I1020 12:39:42.644382  228540 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:39:41.166430  236655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:39:41.269399  236655 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:39:41.275132  236655 fix.go:56] duration metric: took 4.932053574s for fixHost
	I1020 12:39:41.275162  236655 start.go:83] releasing machines lock for "kubernetes-upgrade-196539", held for 4.932111349s
	I1020 12:39:41.275235  236655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-196539
	I1020 12:39:41.294797  236655 ssh_runner.go:195] Run: cat /version.json
	I1020 12:39:41.294878  236655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:39:41.294880  236655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:39:41.294944  236655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:39:41.316398  236655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:39:41.317390  236655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:39:41.486052  236655 ssh_runner.go:195] Run: systemctl --version
	I1020 12:39:41.495687  236655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:39:41.543843  236655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:39:41.549480  236655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:39:41.549541  236655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:39:41.560058  236655 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 12:39:41.560091  236655 start.go:495] detecting cgroup driver to use...
	I1020 12:39:41.560126  236655 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:39:41.560169  236655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:39:41.579947  236655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:39:41.595076  236655 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:39:41.595160  236655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:39:41.614493  236655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:39:41.629457  236655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:39:41.733999  236655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:39:41.841140  236655 docker.go:234] disabling docker service ...
	I1020 12:39:41.841212  236655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:39:41.858742  236655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:39:41.872160  236655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:39:41.967896  236655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:39:42.059464  236655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:39:42.076093  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:39:42.093403  236655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:39:42.093477  236655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:42.104340  236655 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:39:42.104404  236655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:42.117137  236655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:42.129430  236655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:42.140240  236655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:39:42.151113  236655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:42.161121  236655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:42.170070  236655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:39:42.179035  236655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:39:42.186732  236655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:39:42.195686  236655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:39:42.282433  236655 ssh_runner.go:195] Run: sudo systemctl restart crio
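
Note: crio.go:59 and crio.go:70 above rewrite single keys in /etc/crio/crio.conf.d/02-crio.conf with sed one-liners before reloading systemd and restarting crio. The same in-place rewrite expressed in Go with regexp, as a sketch — paths, keys, and values are from the log; using Go instead of sed is this note's choice, not minikube's implementation:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(string(data), `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of the cgroup_manager rewrite two commands later.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, `cgroup_manager = "systemd"`)
	if err := os.WriteFile(conf, []byte(out), 0o644); err != nil {
		log.Fatal(err)
	}
	// A `systemctl daemon-reload` and `systemctl restart crio` must follow,
	// as the log does at 12:39:42.
}
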
	I1020 12:39:43.039150  236655 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:39:43.039230  236655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:39:43.045043  236655 start.go:563] Will wait 60s for crictl version
	I1020 12:39:43.045106  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:39:43.051704  236655 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:39:43.094071  236655 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:39:43.094174  236655 ssh_runner.go:195] Run: crio --version
	I1020 12:39:43.138343  236655 ssh_runner.go:195] Run: crio --version
	I1020 12:39:43.183474  236655 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:39:42.647015  228540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:39:42.656905  228540 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 12:39:42.658376  228540 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:39:42.658398  228540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:39:42.658457  228540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-384253
	I1020 12:39:42.664431  228540 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:39:42.664455  228540 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:39:42.664506  228540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-384253
	I1020 12:39:42.692942  228540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/old-k8s-version-384253/id_rsa Username:docker}
	I1020 12:39:42.692966  228540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/old-k8s-version-384253/id_rsa Username:docker}
	I1020 12:39:42.793169  228540 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:39:42.816698  228540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:39:42.822302  228540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:39:43.071207  228540 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1020 12:39:43.072354  228540 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-384253" to be "Ready" ...
	I1020 12:39:43.295284  228540 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1020 12:39:43.298291  228540 addons.go:514] duration metric: took 728.573506ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1020 12:39:43.575736  228540 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-384253" context rescaled to 1 replicas
	I1020 12:39:38.722348  235059 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1020 12:39:38.722399  235059 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 12:39:38.722442  235059 ssh_runner.go:195] Run: which crictl
	I1020 12:39:38.722445  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1020 12:39:38.722449  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1020 12:39:38.722493  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1020 12:39:38.722514  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1020 12:39:38.722530  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 12:39:38.722610  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1020 12:39:38.758584  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1020 12:39:38.758602  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1020 12:39:38.758626  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1020 12:39:38.759376  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1020 12:39:38.759449  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 12:39:38.759448  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1020 12:39:38.759517  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1020 12:39:38.796093  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1020 12:39:38.796184  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1020 12:39:38.796276  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1020 12:39:38.796564  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1020 12:39:38.799874  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1020 12:39:38.799945  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1020 12:39:38.800056  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 12:39:38.836193  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1020 12:39:38.836208  235059 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1020 12:39:38.836296  235059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1020 12:39:38.836341  235059 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1020 12:39:38.836296  235059 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1020 12:39:38.836414  235059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1020 12:39:38.836427  235059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1020 12:39:38.836461  235059 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1020 12:39:38.836550  235059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1020 12:39:38.840214  235059 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1020 12:39:38.840292  235059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1020 12:39:38.840396  235059 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1020 12:39:38.840468  235059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1020 12:39:38.861768  235059 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1020 12:39:38.861818  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1020 12:39:38.861836  235059 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1020 12:39:38.861860  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1020 12:39:38.861789  235059 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1020 12:39:38.861907  235059 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1020 12:39:38.861920  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1020 12:39:38.861962  235059 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1020 12:39:38.861990  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1020 12:39:38.861998  235059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1020 12:39:38.862022  235059 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1020 12:39:38.862047  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1020 12:39:38.862074  235059 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1020 12:39:38.862093  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1020 12:39:38.872631  235059 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1020 12:39:38.872668  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
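
Note: each cached image above is first probed with `stat -c "%s %y"` on the node; the "No such file or directory" failures are the expected signal to scp the file over. A condensed sketch of that stat-then-copy decision against an *ssh.Client like the one in the earlier sketch — the cat-into-`sudo tee` transfer is a stand-in for minikube's scp step, not its implementation:

package images

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyIfMissing mirrors ssh_runner's existence check: probe the remote path,
// and only transfer the cached file when stat exits non-zero.
func copyIfMissing(client *ssh.Client, localPath, remotePath string) error {
	probe, err := client.NewSession()
	if err != nil {
		return err
	}
	statErr := probe.Run(fmt.Sprintf("stat -c \"%%s %%y\" %s", remotePath))
	probe.Close()
	if statErr == nil {
		return nil // already on the node; skip the transfer
	}

	f, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer f.Close()

	xfer, err := client.NewSession()
	if err != nil {
		return err
	}
	defer xfer.Close()
	xfer.Stdin = f // stream the cached image as the remote command's stdin
	return xfer.Run("sudo tee " + remotePath + " >/dev/null")
}
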
	I1020 12:39:38.930258  235059 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1020 12:39:38.930341  235059 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1020 12:39:39.325184  235059 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1020 12:39:39.325224  235059 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1020 12:39:39.325299  235059 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1020 12:39:39.448618  235059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:39:40.520980  235059 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.195585876s)
	I1020 12:39:40.521001  235059 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.072347677s)
	I1020 12:39:40.521019  235059 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1020 12:39:40.521037  235059 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1020 12:39:40.521051  235059 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1020 12:39:40.521076  235059 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:39:40.521081  235059 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1020 12:39:40.521120  235059 ssh_runner.go:195] Run: which crictl
	I1020 12:39:40.526535  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:39:41.908248  235059 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.387142917s)
	I1020 12:39:41.908276  235059 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1020 12:39:41.908304  235059 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1020 12:39:41.908366  235059 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1020 12:39:41.908367  235059 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.381797541s)
	I1020 12:39:41.908434  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:39:43.552871  235059 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.64447772s)
	I1020 12:39:43.552902  235059 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1020 12:39:43.552922  235059 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1020 12:39:43.552971  235059 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1020 12:39:43.553079  235059 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.644625306s)
	I1020 12:39:43.553128  235059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:39:43.185051  236655 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-196539 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:39:43.207642  236655 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1020 12:39:43.212402  236655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:39:43.224812  236655 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-196539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-196539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:39:43.224945  236655 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:39:43.225006  236655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:39:43.268135  236655 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1020 12:39:43.268208  236655 ssh_runner.go:195] Run: which lz4
	I1020 12:39:43.273554  236655 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1020 12:39:43.278801  236655 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1020 12:39:43.278835  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1020 12:39:44.464003  236655 crio.go:462] duration metric: took 1.190538574s to copy over tarball
	I1020 12:39:44.464099  236655 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	W1020 12:39:45.075856  228540 node_ready.go:57] node "old-k8s-version-384253" has "Ready":"False" status (will retry)
	W1020 12:39:47.077008  228540 node_ready.go:57] node "old-k8s-version-384253" has "Ready":"False" status (will retry)
	I1020 12:39:46.302721  235059 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.749719717s)
	I1020 12:39:46.302760  235059 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1020 12:39:46.302828  235059 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1020 12:39:46.302887  235059 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1020 12:39:46.302766  235059 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.749610223s)
	I1020 12:39:46.302967  235059 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1020 12:39:46.303081  235059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1020 12:39:47.943265  235059 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.640348495s)
	I1020 12:39:47.943300  235059 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1020 12:39:47.943330  235059 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1020 12:39:47.943336  235059 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.640235382s)
	I1020 12:39:47.943368  235059 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1020 12:39:47.943388  235059 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1020 12:39:47.943395  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1020 12:39:47.368198  236655 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.904069011s)
	I1020 12:39:47.368223  236655 crio.go:469] duration metric: took 2.904175959s to extract the tarball
	I1020 12:39:47.368230  236655 ssh_runner.go:146] rm: /preloaded.tar.lz4
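
Note: the ~409MB preload tarball is scp'd to /preloaded.tar.lz4, unpacked into /var with tar's lz4 filter (preserving the security.capability xattrs), and then deleted to reclaim the space. Driving the exact same command from Go, as a sketch — flags are copied from the log; running it locally rather than over SSH is a simplification:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Exact flags from the log: extract the lz4-compressed preload into /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	// Reclaim the space once extracted, as ssh_runner.go:146 does above.
	if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
		log.Fatal(err)
	}
}
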
	I1020 12:39:47.474662  236655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:39:47.508555  236655 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:39:47.508577  236655 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:39:47.508585  236655 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1020 12:39:47.508706  236655 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-196539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-196539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
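
Note: the kubelet unit above is rendered from the node config (kubeadm.go:946) and written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A text/template sketch of that rendering — the template shape is an assumption; only the flag values appear in the log:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the node entry logged above.
	_ = t.Execute(os.Stdout, struct{ Version, Name, IP string }{
		"v1.34.1", "kubernetes-upgrade-196539", "192.168.94.2",
	})
}
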
	I1020 12:39:47.508792  236655 ssh_runner.go:195] Run: crio config
	I1020 12:39:47.558879  236655 cni.go:84] Creating CNI manager for ""
	I1020 12:39:47.558901  236655 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:39:47.558917  236655 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:39:47.558945  236655 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-196539 NodeName:kubernetes-upgrade-196539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:39:47.559108  236655 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-196539"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 12:39:47.559176  236655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:39:47.567624  236655 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:39:47.567681  236655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:39:47.576086  236655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1020 12:39:47.597950  236655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:39:47.615040  236655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1020 12:39:47.630426  236655 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:39:47.634713  236655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:39:47.648310  236655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:39:47.759638  236655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:39:47.784862  236655 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539 for IP: 192.168.94.2
	I1020 12:39:47.784890  236655 certs.go:195] generating shared ca certs ...
	I1020 12:39:47.784912  236655 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:39:47.785084  236655 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:39:47.785127  236655 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:39:47.785137  236655 certs.go:257] generating profile certs ...
	I1020 12:39:47.785236  236655 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/client.key
	I1020 12:39:47.785301  236655 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/apiserver.key.096e3b89
	I1020 12:39:47.785337  236655 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/proxy-client.key
	I1020 12:39:47.785454  236655 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:39:47.785484  236655 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:39:47.785493  236655 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:39:47.785518  236655 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:39:47.785541  236655 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:39:47.785562  236655 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:39:47.785599  236655 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:39:47.786231  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:39:47.806123  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:39:47.827677  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:39:47.851480  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:39:47.875532  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1020 12:39:47.898808  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:39:47.917360  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:39:47.936007  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:39:47.957270  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:39:47.979124  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:39:48.001620  236655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:39:48.069883  236655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:39:48.084561  236655 ssh_runner.go:195] Run: openssl version
	I1020 12:39:48.091175  236655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:39:48.100755  236655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:39:48.105133  236655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:39:48.105193  236655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:39:48.145586  236655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:39:48.154812  236655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:39:48.164524  236655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:39:48.169220  236655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:39:48.169325  236655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:39:48.206409  236655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:39:48.215371  236655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:39:48.224457  236655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:39:48.228631  236655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:39:48.228697  236655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:39:48.268488  236655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:39:48.278492  236655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:39:48.283031  236655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 12:39:48.319256  236655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 12:39:48.395855  236655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 12:39:48.435706  236655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 12:39:48.539311  236655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 12:39:48.578292  236655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
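
Note: each `openssl x509 -noout -in <cert> -checkend 86400` above exits non-zero if the certificate expires within the next 86400 seconds (24h), which is what triggers minikube's cert regeneration. The equivalent check in Go's crypto/x509, as a sketch that reads the cert locally (on the real node the path is remote):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// -checkend 86400: fail if the cert is no longer valid 24h from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		log.Fatalf("certificate expires within 24h (NotAfter=%s)", cert.NotAfter)
	}
	fmt.Printf("valid until %s\n", cert.NotAfter)
}
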
	I1020 12:39:48.615172  236655 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-196539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-196539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:39:48.615254  236655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:39:48.615311  236655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:39:48.646413  236655 cri.go:89] found id: ""
	I1020 12:39:48.646480  236655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:39:48.691816  236655 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 12:39:48.691837  236655 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 12:39:48.691888  236655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 12:39:48.700164  236655 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:39:48.700759  236655 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-196539" does not appear in /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:39:48.701210  236655 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-11075/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-196539" cluster setting kubeconfig missing "kubernetes-upgrade-196539" context setting]
	I1020 12:39:48.701900  236655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:39:48.748482  236655 kapi.go:59] client config for kubernetes-upgrade-196539: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/client.crt", KeyFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/client.key", CAFile:"/home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1020 12:39:48.748950  236655 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1020 12:39:48.748970  236655 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1020 12:39:48.748975  236655 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1020 12:39:48.748979  236655 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1020 12:39:48.748982  236655 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1020 12:39:48.749368  236655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 12:39:48.758550  236655 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-20 12:39:23.126583000 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-20 12:39:47.628278762 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.94.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-196539"
	   kubeletExtraArgs:
	-    node-ip: 192.168.94.2
	+    - name: "node-ip"
	+      value: "192.168.94.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.34.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
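	The drift check reduces to running `diff -u old new` over SSH and branching on the exit status: 0 means identical, 1 means the rendered config changed (here the kubeadm API moved from v1beta3 to v1beta4, the extraArgs maps became name/value lists, and kubernetesVersion jumped from v1.28.0 to v1.34.1), so the cluster is reconfigured from the new file. A local sketch of that exit-code handling, using the paths from the log:

// Sketch: drift detection as exit-code handling around diff(1).
// diff exits 0 when the files match, 1 when they differ, >1 on error.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: no drift
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: config changed
	}
	return false, "", err // exit >1: diff itself failed
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}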
	I1020 12:39:48.758569  236655 kubeadm.go:1160] stopping kube-system containers ...
	I1020 12:39:48.758582  236655 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1020 12:39:48.758627  236655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:39:48.788583  236655 cri.go:89] found id: ""
	I1020 12:39:48.788655  236655 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1020 12:39:48.838564  236655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:39:48.847602  236655 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Oct 20 12:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct 20 12:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Oct 20 12:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 20 12:39 /etc/kubernetes/scheduler.conf
	
	I1020 12:39:48.847668  236655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 12:39:48.856280  236655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 12:39:48.864493  236655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 12:39:48.872866  236655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:39:48.872920  236655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:39:48.894672  236655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 12:39:48.903662  236655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:39:48.903726  236655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
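The grep/rm sequence above keeps each kubeconfig only if it already references the expected control-plane endpoint; anything that fails the grep is deleted so kubeadm regenerates it. A sketch of the same cleanup (endpoint and paths from the log):

// Sketch: remove any kubeconfig that does not mention the expected
// control-plane endpoint, mirroring the grep-then-rm sequence above.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func pruneStale(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = os.Remove(f)
		}
	}
}

func main() {
	pruneStale("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}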
	I1020 12:39:48.911517  236655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:39:48.965074  236655 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:39:49.007992  236655 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:39:50.622306  236655 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.614277178s)
	I1020 12:39:50.622389  236655 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:39:50.831100  236655 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:39:50.890289  236655 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:39:50.951105  236655 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:39:50.951175  236655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 12:39:49.576363  228540 node_ready.go:57] node "old-k8s-version-384253" has "Ready":"False" status (will retry)
	W1020 12:39:52.075354  228540 node_ready.go:57] node "old-k8s-version-384253" has "Ready":"False" status (will retry)
	I1020 12:39:51.092720  235059 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.149309817s)
	I1020 12:39:51.092745  235059 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1020 12:39:51.092799  235059 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1020 12:39:51.092855  235059 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1020 12:39:51.680523  235059 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1020 12:39:51.680570  235059 cache_images.go:124] Successfully loaded all cached images
	I1020 12:39:51.680577  235059 cache_images.go:93] duration metric: took 13.22042767s to LoadCachedImages
	I1020 12:39:51.680592  235059 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 12:39:51.680699  235059 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-649841 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 12:39:51.680794  235059 ssh_runner.go:195] Run: crio config
	I1020 12:39:51.743949  235059 cni.go:84] Creating CNI manager for ""
	I1020 12:39:51.743974  235059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:39:51.743995  235059 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:39:51.744024  235059 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-649841 NodeName:no-preload-649841 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:39:51.744193  235059 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-649841"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
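	Note how the rendered v1beta4 config expresses every component flag as a name/value pair rather than the flat map v1beta3 used. A sketch of that rendering step for the controllerManager flags above (a simplified stand-in for the template minikube actually uses):

// Sketch: turn a flag map like the ComponentOptions ExtraArgs above into
// the v1beta4 name/value extraArgs list, with keys sorted for stable output.
package main

import (
	"fmt"
	"sort"
)

func renderExtraArgs(args map[string]string) string {
	keys := make([]string, 0, len(args))
	for k := range args {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	out := "  extraArgs:\n"
	for _, k := range keys {
		out += fmt.Sprintf("    - name: %q\n      value: %q\n", k, args[k])
	}
	return out
}

func main() {
	fmt.Print("controllerManager:\n" + renderExtraArgs(map[string]string{
		"allocate-node-cidrs": "true",
		"leader-elect":        "false",
	}))
}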
	
	I1020 12:39:51.744261  235059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:39:51.753882  235059 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1020 12:39:51.753942  235059 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1020 12:39:51.762649  235059 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1020 12:39:51.762678  235059 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1020 12:39:51.762689  235059 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/linux/amd64/v1.34.1/kubeadm
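The `?checksum=file:<url>.sha256` suffix on these URLs means the downloader fetches the published SHA-256 alongside the binary and verifies the bytes before install. A hand-rolled sketch of that verification (the checksum URL is the real release URL; the local path is illustrative, and minikube's download package handles this internally):

// Sketch: fetch a published SHA-256 and compare it against a local file.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetchSHA256(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	sum, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(sum)) // file may read "<hash>  <name>"
	if len(fields) == 0 {
		return "", fmt.Errorf("empty checksum file at %s", url)
	}
	return fields[0], nil
}

func verify(path, sumURL string) error {
	want, err := fetchSHA256(sumURL)
	if err != nil {
		return err
	}
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
	}
	return nil
}

func main() {
	// "/tmp/kubeadm" is an illustrative local path, not minikube's cache layout.
	if err := verify("/tmp/kubeadm", "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}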
	I1020 12:39:51.762756  235059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1020 12:39:51.767360  235059 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1020 12:39:51.767387  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1020 12:39:52.337788  235059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:39:52.351394  235059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1020 12:39:52.356087  235059 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1020 12:39:52.356126  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1020 12:39:52.675905  235059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1020 12:39:52.681197  235059 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1020 12:39:52.681230  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1020 12:39:52.866998  235059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:39:52.875947  235059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1020 12:39:52.889227  235059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:39:52.909587  235059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1020 12:39:52.922935  235059 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:39:52.927038  235059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
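The /etc/hosts one-liner is an idempotent upsert: drop any existing `control-plane.minikube.internal` line, append the current mapping, and copy the temp file back into place. Sketched in Go (upsertHost is our name):

// Sketch: remove any line already ending in "<tab><name>", then append
// the fresh "ip<tab>name" mapping, exactly as the shell pipeline does.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, _ := os.ReadFile("/etc/hosts")
	fmt.Print(upsertHost(string(data), "192.168.85.2", "control-plane.minikube.internal"))
}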
	I1020 12:39:52.937244  235059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:39:53.019202  235059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:39:53.044541  235059 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841 for IP: 192.168.85.2
	I1020 12:39:53.044565  235059 certs.go:195] generating shared ca certs ...
	I1020 12:39:53.044584  235059 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:39:53.044723  235059 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:39:53.044802  235059 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:39:53.044818  235059 certs.go:257] generating profile certs ...
	I1020 12:39:53.044883  235059 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.key
	I1020 12:39:53.044899  235059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.crt with IP's: []
	I1020 12:39:53.359324  235059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.crt ...
	I1020 12:39:53.359356  235059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.crt: {Name:mk6440547cf2fe5996ea08bf17043f6f28409904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:39:53.359544  235059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.key ...
	I1020 12:39:53.359561  235059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.key: {Name:mk5f517c55992bb7c9d195c799df655d93e0e138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:39:53.359667  235059 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.key.f7062585
	I1020 12:39:53.359685  235059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.crt.f7062585 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1020 12:39:53.585488  235059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.crt.f7062585 ...
	I1020 12:39:53.585524  235059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.crt.f7062585: {Name:mkaada2d92e47b6497a754c26b95437d4a8347d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:39:53.585697  235059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.key.f7062585 ...
	I1020 12:39:53.585712  235059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.key.f7062585: {Name:mkfe4c1850f6e0f64ce28656b322d26697be2a21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:39:53.585845  235059 certs.go:382] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.crt.f7062585 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.crt
	I1020 12:39:53.585949  235059 certs.go:386] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.key.f7062585 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.key
	I1020 12:39:53.586050  235059 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.key
	I1020 12:39:53.586072  235059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.crt with IP's: []
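The profile certs generated here are CA-signed certificates whose IP SANs cover the service VIP, loopback, and the node IP. A compressed crypto/x509 sketch of issuing such a cert; the self-signed stand-in CA is illustrative, since minikube reuses its cached ca.crt/ca.key:

// Sketch: issue a serving certificate with the IP SANs seen in the log,
// signed by a (stand-in) CA key pair.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; minikube loads its cached CA instead of generating one.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs from the log line above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}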
	I1020 12:39:51.451988  236655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:39:51.467572  236655 api_server.go:72] duration metric: took 516.467211ms to wait for apiserver process to appear ...
	I1020 12:39:51.467602  236655 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:39:51.467625  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:39:51.468019  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:39:51.968307  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
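Once the process exists, readiness is a plain HTTPS poll of /healthz until it answers 200; the first attempt above fails with connection refused and is retried on a short interval. A sketch of that loop (TLS verification is skipped here for brevity, whereas the real check uses a properly configured client):

// Sketch: poll the apiserver's /healthz until it returns 200 or the
// deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // retry interval, as in the log
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.94.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}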
	W1020 12:39:54.075908  228540 node_ready.go:57] node "old-k8s-version-384253" has "Ready":"False" status (will retry)
	W1020 12:39:56.575189  228540 node_ready.go:57] node "old-k8s-version-384253" has "Ready":"False" status (will retry)
	I1020 12:39:57.576397  228540 node_ready.go:49] node "old-k8s-version-384253" is "Ready"
	I1020 12:39:57.576430  228540 node_ready.go:38] duration metric: took 14.504035885s for node "old-k8s-version-384253" to be "Ready" ...
	I1020 12:39:57.576445  228540 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:39:57.576501  228540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:39:57.593500  228540 api_server.go:72] duration metric: took 15.023852993s to wait for apiserver process to appear ...
	I1020 12:39:57.593555  228540 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:39:57.593578  228540 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1020 12:39:57.601157  228540 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1020 12:39:57.602556  228540 api_server.go:141] control plane version: v1.28.0
	I1020 12:39:57.602582  228540 api_server.go:131] duration metric: took 9.020036ms to wait for apiserver health ...
	I1020 12:39:57.602590  228540 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:39:57.606915  228540 system_pods.go:59] 8 kube-system pods found
	I1020 12:39:57.606992  228540 system_pods.go:61] "coredns-5dd5756b68-c9869" [716187e3-ac87-4b80-9c9e-8506a57065fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:39:57.606999  228540 system_pods.go:61] "etcd-old-k8s-version-384253" [ce7a43e8-08f7-4e84-b1fe-86acabcce6f9] Running
	I1020 12:39:57.607005  228540 system_pods.go:61] "kindnet-tr8rl" [f79b052b-d1dd-4286-b20d-68cf4d168011] Running
	I1020 12:39:57.607010  228540 system_pods.go:61] "kube-apiserver-old-k8s-version-384253" [ed116150-0b3c-407b-a6dd-afb5be3ec36e] Running
	I1020 12:39:57.607015  228540 system_pods.go:61] "kube-controller-manager-old-k8s-version-384253" [30b70a74-4371-47aa-a2f2-96120d6e11d1] Running
	I1020 12:39:57.607019  228540 system_pods.go:61] "kube-proxy-qfvtm" [7a6f01c1-75f3-4e4d-9b4c-7591ec88957b] Running
	I1020 12:39:57.607024  228540 system_pods.go:61] "kube-scheduler-old-k8s-version-384253" [63157d7c-6a41-44b0-a270-29bcb74a9d24] Running
	I1020 12:39:57.607030  228540 system_pods.go:61] "storage-provisioner" [5d787059-9dff-4f2b-a0ed-7f579464768e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:39:57.607039  228540 system_pods.go:74] duration metric: took 4.442491ms to wait for pod list to return data ...
	I1020 12:39:57.607107  228540 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:39:57.609747  228540 default_sa.go:45] found service account: "default"
	I1020 12:39:57.609765  228540 default_sa.go:55] duration metric: took 2.649333ms for default service account to be created ...
	I1020 12:39:57.609810  228540 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:39:57.613448  228540 system_pods.go:86] 8 kube-system pods found
	I1020 12:39:57.613475  228540 system_pods.go:89] "coredns-5dd5756b68-c9869" [716187e3-ac87-4b80-9c9e-8506a57065fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:39:57.613480  228540 system_pods.go:89] "etcd-old-k8s-version-384253" [ce7a43e8-08f7-4e84-b1fe-86acabcce6f9] Running
	I1020 12:39:57.613485  228540 system_pods.go:89] "kindnet-tr8rl" [f79b052b-d1dd-4286-b20d-68cf4d168011] Running
	I1020 12:39:57.613488  228540 system_pods.go:89] "kube-apiserver-old-k8s-version-384253" [ed116150-0b3c-407b-a6dd-afb5be3ec36e] Running
	I1020 12:39:57.613492  228540 system_pods.go:89] "kube-controller-manager-old-k8s-version-384253" [30b70a74-4371-47aa-a2f2-96120d6e11d1] Running
	I1020 12:39:57.613496  228540 system_pods.go:89] "kube-proxy-qfvtm" [7a6f01c1-75f3-4e4d-9b4c-7591ec88957b] Running
	I1020 12:39:57.613499  228540 system_pods.go:89] "kube-scheduler-old-k8s-version-384253" [63157d7c-6a41-44b0-a270-29bcb74a9d24] Running
	I1020 12:39:57.613504  228540 system_pods.go:89] "storage-provisioner" [5d787059-9dff-4f2b-a0ed-7f579464768e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:39:57.613521  228540 retry.go:31] will retry after 210.762734ms: missing components: kube-dns
	I1020 12:39:57.828708  228540 system_pods.go:86] 8 kube-system pods found
	I1020 12:39:57.828747  228540 system_pods.go:89] "coredns-5dd5756b68-c9869" [716187e3-ac87-4b80-9c9e-8506a57065fa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:39:57.828755  228540 system_pods.go:89] "etcd-old-k8s-version-384253" [ce7a43e8-08f7-4e84-b1fe-86acabcce6f9] Running
	I1020 12:39:57.828763  228540 system_pods.go:89] "kindnet-tr8rl" [f79b052b-d1dd-4286-b20d-68cf4d168011] Running
	I1020 12:39:57.828782  228540 system_pods.go:89] "kube-apiserver-old-k8s-version-384253" [ed116150-0b3c-407b-a6dd-afb5be3ec36e] Running
	I1020 12:39:57.828790  228540 system_pods.go:89] "kube-controller-manager-old-k8s-version-384253" [30b70a74-4371-47aa-a2f2-96120d6e11d1] Running
	I1020 12:39:57.828795  228540 system_pods.go:89] "kube-proxy-qfvtm" [7a6f01c1-75f3-4e4d-9b4c-7591ec88957b] Running
	I1020 12:39:57.828803  228540 system_pods.go:89] "kube-scheduler-old-k8s-version-384253" [63157d7c-6a41-44b0-a270-29bcb74a9d24] Running
	I1020 12:39:57.828810  228540 system_pods.go:89] "storage-provisioner" [5d787059-9dff-4f2b-a0ed-7f579464768e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:39:57.828831  228540 retry.go:31] will retry after 379.98142ms: missing components: kube-dns
	I1020 12:39:58.213931  228540 system_pods.go:86] 8 kube-system pods found
	I1020 12:39:58.213962  228540 system_pods.go:89] "coredns-5dd5756b68-c9869" [716187e3-ac87-4b80-9c9e-8506a57065fa] Running
	I1020 12:39:58.213970  228540 system_pods.go:89] "etcd-old-k8s-version-384253" [ce7a43e8-08f7-4e84-b1fe-86acabcce6f9] Running
	I1020 12:39:58.213976  228540 system_pods.go:89] "kindnet-tr8rl" [f79b052b-d1dd-4286-b20d-68cf4d168011] Running
	I1020 12:39:58.213981  228540 system_pods.go:89] "kube-apiserver-old-k8s-version-384253" [ed116150-0b3c-407b-a6dd-afb5be3ec36e] Running
	I1020 12:39:58.213986  228540 system_pods.go:89] "kube-controller-manager-old-k8s-version-384253" [30b70a74-4371-47aa-a2f2-96120d6e11d1] Running
	I1020 12:39:58.213991  228540 system_pods.go:89] "kube-proxy-qfvtm" [7a6f01c1-75f3-4e4d-9b4c-7591ec88957b] Running
	I1020 12:39:58.213995  228540 system_pods.go:89] "kube-scheduler-old-k8s-version-384253" [63157d7c-6a41-44b0-a270-29bcb74a9d24] Running
	I1020 12:39:58.214000  228540 system_pods.go:89] "storage-provisioner" [5d787059-9dff-4f2b-a0ed-7f579464768e] Running
	I1020 12:39:58.214010  228540 system_pods.go:126] duration metric: took 604.1929ms to wait for k8s-apps to be running ...
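The k8s-apps wait lists the kube-system pods, requires every pod to be Running, and retries while a required component (kube-dns above) is still missing. Rebuilt as a client-go sketch; the kubeconfig path is illustrative, since minikube constructs its rest.Config directly (see the kapi.go line earlier):

// Sketch: poll kube-system until all pods are Running and the kube-dns
// app is present.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func systemPodsRunning(cs *kubernetes.Clientset) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	dnsFound := false
	for _, p := range pods.Items {
		if p.Labels["k8s-app"] == "kube-dns" {
			dnsFound = true
		}
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return dnsFound, nil // keep retrying while kube-dns is missing
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := systemPodsRunning(cs); err == nil && ok {
			fmt.Println("all kube-system pods running")
			return
		}
		time.Sleep(250 * time.Millisecond)
	}
	fmt.Println("timed out: missing components")
}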
	I1020 12:39:58.214023  228540 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:39:58.214071  228540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:39:58.227460  228540 system_svc.go:56] duration metric: took 13.426734ms WaitForService to wait for kubelet
	I1020 12:39:58.227489  228540 kubeadm.go:586] duration metric: took 15.657876235s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:39:58.227512  228540 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:39:58.230216  228540 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:39:58.230245  228540 node_conditions.go:123] node cpu capacity is 8
	I1020 12:39:58.230258  228540 node_conditions.go:105] duration metric: took 2.740721ms to run NodePressure ...
	I1020 12:39:58.230277  228540 start.go:241] waiting for startup goroutines ...
	I1020 12:39:58.230289  228540 start.go:246] waiting for cluster config update ...
	I1020 12:39:58.230306  228540 start.go:255] writing updated cluster config ...
	I1020 12:39:58.230608  228540 ssh_runner.go:195] Run: rm -f paused
	I1020 12:39:58.234403  228540 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:39:58.241567  228540 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-c9869" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:39:58.246287  228540 pod_ready.go:94] pod "coredns-5dd5756b68-c9869" is "Ready"
	I1020 12:39:58.246309  228540 pod_ready.go:86] duration metric: took 4.71677ms for pod "coredns-5dd5756b68-c9869" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:39:58.249114  228540 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:39:58.253332  228540 pod_ready.go:94] pod "etcd-old-k8s-version-384253" is "Ready"
	I1020 12:39:58.253351  228540 pod_ready.go:86] duration metric: took 4.217585ms for pod "etcd-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:39:58.255847  228540 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:39:58.259726  228540 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-384253" is "Ready"
	I1020 12:39:58.259743  228540 pod_ready.go:86] duration metric: took 3.878868ms for pod "kube-apiserver-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:39:58.262501  228540 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:39:54.010388  235059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.crt ...
	I1020 12:39:54.010416  235059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.crt: {Name:mkd74aa2adaf25d7bc0eff1d8b535069fa148957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:39:54.010600  235059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.key ...
	I1020 12:39:54.010618  235059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.key: {Name:mk667aefb46836ec771cb54c6fa2fd011ae81c17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:39:54.010873  235059 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:39:54.010929  235059 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:39:54.010944  235059 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:39:54.010978  235059 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:39:54.011011  235059 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:39:54.011044  235059 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:39:54.011110  235059 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:39:54.011683  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:39:54.030360  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:39:54.048100  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:39:54.066262  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:39:54.084333  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1020 12:39:54.101950  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:39:54.119761  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:39:54.139047  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:39:54.157723  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:39:54.177697  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:39:54.195356  235059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:39:54.212943  235059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:39:54.225416  235059 ssh_runner.go:195] Run: openssl version
	I1020 12:39:54.231759  235059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:39:54.240428  235059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:39:54.244315  235059 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:39:54.244365  235059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:39:54.278922  235059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:39:54.287645  235059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:39:54.296321  235059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:39:54.300406  235059 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:39:54.300451  235059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:39:54.336872  235059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:39:54.346253  235059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:39:54.355565  235059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:39:54.359755  235059 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:39:54.359821  235059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:39:54.395098  235059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:39:54.404606  235059 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:39:54.409560  235059 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 12:39:54.409621  235059 kubeadm.go:400] StartCluster: {Name:no-preload-649841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:39:54.409733  235059 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:39:54.409799  235059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:39:54.438037  235059 cri.go:89] found id: ""
	I1020 12:39:54.438108  235059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:39:54.446667  235059 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:39:54.454811  235059 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 12:39:54.454883  235059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:39:54.462765  235059 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 12:39:54.462801  235059 kubeadm.go:157] found existing configuration files:
	
	I1020 12:39:54.462845  235059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 12:39:54.470407  235059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 12:39:54.470450  235059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 12:39:54.478071  235059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 12:39:54.485819  235059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 12:39:54.485870  235059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:39:54.493474  235059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 12:39:54.501596  235059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 12:39:54.501650  235059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:39:54.509833  235059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 12:39:54.517846  235059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 12:39:54.517907  235059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 12:39:54.525911  235059 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 12:39:54.592411  235059 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1020 12:39:54.652486  235059 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1020 12:39:58.638865  228540 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-384253" is "Ready"
	I1020 12:39:58.638890  228540 pod_ready.go:86] duration metric: took 376.371147ms for pod "kube-controller-manager-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:39:58.839763  228540 pod_ready.go:83] waiting for pod "kube-proxy-qfvtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:39:59.239354  228540 pod_ready.go:94] pod "kube-proxy-qfvtm" is "Ready"
	I1020 12:39:59.239378  228540 pod_ready.go:86] duration metric: took 399.578367ms for pod "kube-proxy-qfvtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:39:59.439287  228540 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:39:59.839579  228540 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-384253" is "Ready"
	I1020 12:39:59.839611  228540 pod_ready.go:86] duration metric: took 400.294599ms for pod "kube-scheduler-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:39:59.839625  228540 pod_ready.go:40] duration metric: took 1.605192065s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
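Each of these pod_ready waits reduces to one predicate: the pod's PodReady condition must report True. A tiny self-contained sketch of that check:

// Sketch: a pod counts as "Ready" when its PodReady condition is True.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(podIsReady(p)) // true
}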
	I1020 12:39:59.894169  228540 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1020 12:39:59.895853  228540 out.go:203] 
	W1020 12:39:59.897358  228540 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1020 12:39:59.898940  228540 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1020 12:39:59.901025  228540 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-384253" cluster and "default" namespace by default
	I1020 12:39:56.968632  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1020 12:39:56.968691  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:02.807073  235059 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 12:40:02.807135  235059 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 12:40:02.807233  235059 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 12:40:02.807295  235059 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1020 12:40:02.807326  235059 kubeadm.go:318] OS: Linux
	I1020 12:40:02.807366  235059 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 12:40:02.807422  235059 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 12:40:02.807465  235059 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 12:40:02.807511  235059 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 12:40:02.807553  235059 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 12:40:02.807632  235059 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 12:40:02.807737  235059 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 12:40:02.807798  235059 kubeadm.go:318] CGROUPS_IO: enabled
	I1020 12:40:02.807862  235059 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 12:40:02.807984  235059 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 12:40:02.808120  235059 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 12:40:02.808221  235059 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 12:40:02.810065  235059 out.go:252]   - Generating certificates and keys ...
	I1020 12:40:02.810172  235059 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 12:40:02.810261  235059 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 12:40:02.810404  235059 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 12:40:02.810503  235059 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 12:40:02.810584  235059 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 12:40:02.810652  235059 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 12:40:02.810720  235059 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 12:40:02.810850  235059 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-649841] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1020 12:40:02.810910  235059 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 12:40:02.811070  235059 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-649841] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1020 12:40:02.811153  235059 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 12:40:02.811237  235059 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 12:40:02.811310  235059 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 12:40:02.811403  235059 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 12:40:02.811464  235059 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 12:40:02.811538  235059 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 12:40:02.811639  235059 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 12:40:02.811756  235059 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 12:40:02.811870  235059 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 12:40:02.812005  235059 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 12:40:02.812117  235059 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1020 12:40:02.813512  235059 out.go:252]   - Booting up control plane ...
	I1020 12:40:02.813596  235059 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 12:40:02.813662  235059 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 12:40:02.813744  235059 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 12:40:02.813889  235059 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 12:40:02.814020  235059 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 12:40:02.814116  235059 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 12:40:02.814193  235059 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 12:40:02.814235  235059 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 12:40:02.814347  235059 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 12:40:02.814450  235059 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 12:40:02.814528  235059 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.278989ms
	I1020 12:40:02.814633  235059 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 12:40:02.814714  235059 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1020 12:40:02.814877  235059 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 12:40:02.814990  235059 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 12:40:02.815113  235059 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.005312829s
	I1020 12:40:02.815223  235059 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.861252169s
	I1020 12:40:02.815331  235059 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.50154969s
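	The three control-plane-check probes above poll fixed health endpoints until they answer. A minimal Go sketch of the same polling, assuming the endpoint URLs from the log; kubeadm itself validates against the cluster CA, which this sketch skips (InsecureSkipVerify) purely to stay self-contained:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// probe polls url until it returns 200 OK or the deadline passes.
	func probe(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// kubeadm verifies against the cluster CA; skipping
			// verification here only keeps the sketch short.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}
	
	func main() {
		for _, u := range []string{
			"https://192.168.85.2:8443/livez", // kube-apiserver
			"https://127.0.0.1:10257/healthz", // kube-controller-manager
			"https://127.0.0.1:10259/livez",   // kube-scheduler
		} {
			fmt.Println(u, probe(u, 4*time.Minute))
		}
	}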
	I1020 12:40:02.815442  235059 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 12:40:02.815617  235059 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 12:40:02.815701  235059 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 12:40:02.815956  235059 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-649841 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 12:40:02.816032  235059 kubeadm.go:318] [bootstrap-token] Using token: 0gg2s1.uk3u8ow7iz7rpwtz
	I1020 12:40:02.818098  235059 out.go:252]   - Configuring RBAC rules ...
	I1020 12:40:02.818197  235059 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 12:40:02.818285  235059 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 12:40:02.818454  235059 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 12:40:02.818591  235059 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 12:40:02.818704  235059 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 12:40:02.818806  235059 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 12:40:02.818947  235059 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 12:40:02.818993  235059 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 12:40:02.819033  235059 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 12:40:02.819039  235059 kubeadm.go:318] 
	I1020 12:40:02.819098  235059 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 12:40:02.819103  235059 kubeadm.go:318] 
	I1020 12:40:02.819167  235059 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 12:40:02.819172  235059 kubeadm.go:318] 
	I1020 12:40:02.819193  235059 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 12:40:02.819244  235059 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 12:40:02.819288  235059 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 12:40:02.819307  235059 kubeadm.go:318] 
	I1020 12:40:02.819364  235059 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 12:40:02.819375  235059 kubeadm.go:318] 
	I1020 12:40:02.819417  235059 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 12:40:02.819423  235059 kubeadm.go:318] 
	I1020 12:40:02.819491  235059 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 12:40:02.819596  235059 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 12:40:02.819655  235059 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 12:40:02.819661  235059 kubeadm.go:318] 
	I1020 12:40:02.819731  235059 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 12:40:02.819838  235059 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 12:40:02.819847  235059 kubeadm.go:318] 
	I1020 12:40:02.819921  235059 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 0gg2s1.uk3u8ow7iz7rpwtz \
	I1020 12:40:02.820030  235059 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e \
	I1020 12:40:02.820049  235059 kubeadm.go:318] 	--control-plane 
	I1020 12:40:02.820055  235059 kubeadm.go:318] 
	I1020 12:40:02.820127  235059 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 12:40:02.820133  235059 kubeadm.go:318] 
	I1020 12:40:02.820211  235059 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 0gg2s1.uk3u8ow7iz7rpwtz \
	I1020 12:40:02.820314  235059 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e 
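	The join commands above pin the cluster CA with --discovery-token-ca-cert-hash. kubeadm computes that value as the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo; a minimal sketch, assuming the certificateDir reported earlier in this log ("/var/lib/minikube/certs"):
	
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(spki)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}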
	I1020 12:40:02.820324  235059 cni.go:84] Creating CNI manager for ""
	I1020 12:40:02.820330  235059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:40:02.822723  235059 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 12:40:02.824088  235059 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 12:40:02.829485  235059 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 12:40:02.829505  235059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 12:40:02.843699  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 12:40:03.048941  235059 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 12:40:03.049021  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-649841 minikube.k8s.io/updated_at=2025_10_20T12_40_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=no-preload-649841 minikube.k8s.io/primary=true
	I1020 12:40:03.049021  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:40:03.060097  235059 ops.go:34] apiserver oom_adj: -16
	I1020 12:40:03.124431  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:40:03.624756  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:40:01.969927  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1020 12:40:01.969966  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:04.125201  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:40:04.624878  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:40:05.125290  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:40:05.625423  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:40:06.125375  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:40:06.624659  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:40:07.124978  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:40:07.624493  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:40:08.124997  235059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:40:08.220892  235059 kubeadm.go:1113] duration metric: took 5.171941489s to wait for elevateKubeSystemPrivileges
	I1020 12:40:08.220927  235059 kubeadm.go:402] duration metric: took 13.811309325s to StartCluster
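	The burst of `kubectl get sa default` calls above, at roughly 500ms intervals, is minikube waiting for the default service account to exist before it can grant kube-system privileges. A minimal sketch of that poll, assuming kubectl is on PATH; waitForDefaultSA is a hypothetical helper name:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForDefaultSA retries `kubectl get sa default` until it succeeds
	// or the deadline expires, mirroring the cadence visible in the log.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig,
				"get", "sa", "default")
			if err := cmd.Run(); err == nil {
				return nil // the default service account exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}
	
	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}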
	I1020 12:40:08.220948  235059 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:40:08.221021  235059 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:40:08.223057  235059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:40:08.223989  235059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 12:40:08.224111  235059 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:40:08.224291  235059 config.go:182] Loaded profile config "no-preload-649841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:40:08.224358  235059 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:40:08.224455  235059 addons.go:69] Setting storage-provisioner=true in profile "no-preload-649841"
	I1020 12:40:08.224475  235059 addons.go:238] Setting addon storage-provisioner=true in "no-preload-649841"
	I1020 12:40:08.224514  235059 host.go:66] Checking if "no-preload-649841" exists ...
	I1020 12:40:08.224940  235059 addons.go:69] Setting default-storageclass=true in profile "no-preload-649841"
	I1020 12:40:08.224968  235059 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-649841"
	I1020 12:40:08.225158  235059 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:08.225204  235059 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:08.225990  235059 out.go:179] * Verifying Kubernetes components...
	I1020 12:40:08.230921  235059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:40:08.254124  235059 addons.go:238] Setting addon default-storageclass=true in "no-preload-649841"
	I1020 12:40:08.254171  235059 host.go:66] Checking if "no-preload-649841" exists ...
	I1020 12:40:08.254692  235059 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:08.256217  235059 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:40:08.257900  235059 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:40:08.257919  235059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:40:08.257980  235059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:08.292149  235059 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:40:08.292175  235059 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:40:08.292235  235059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:08.294882  235059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:08.318130  235059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:08.347615  235059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 12:40:08.417599  235059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:40:08.427816  235059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:40:08.452260  235059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:40:08.572193  235059 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1020 12:40:08.573511  235059 node_ready.go:35] waiting up to 6m0s for node "no-preload-649841" to be "Ready" ...
	I1020 12:40:08.813332  235059 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
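	node_ready.go above waits up to 6m0s for the node to report Ready. An equivalent check with client-go, as a sketch: nodeReady is a hypothetical helper, and the kubeconfig path is the one the log itself uses:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeReady reports whether the named node has a Ready=True condition.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			if ok, _ := nodeReady(cs, "no-preload-649841"); ok {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for Ready")
	}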
	
	
	==> CRI-O <==
	Oct 20 12:39:57 old-k8s-version-384253 crio[774]: time="2025-10-20T12:39:57.545172601Z" level=info msg="Starting container: 0c220f264ee7b7592cd59183847d72d333bfd741678a237ceed474621ace679a" id=28025f0b-a31a-4913-b7ae-ea528c98e692 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:39:57 old-k8s-version-384253 crio[774]: time="2025-10-20T12:39:57.547082604Z" level=info msg="Started container" PID=2123 containerID=0c220f264ee7b7592cd59183847d72d333bfd741678a237ceed474621ace679a description=kube-system/coredns-5dd5756b68-c9869/coredns id=28025f0b-a31a-4913-b7ae-ea528c98e692 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed128076bd25cc022e3731778ab30696bd7ea8bd072f4332b797ee02f0ecd1da
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.349943825Z" level=info msg="Running pod sandbox: default/busybox/POD" id=84dcaef9-cd44-4d55-ae3b-7781bdeda597 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.350066028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.467509139Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d1938a72406488b50cb2c2088ccfa0ee93b83e6ac968ad6f62692c1a36968de9 UID:d9370c1f-a3cd-4443-a78d-24bb86844f37 NetNS:/var/run/netns/75ee988e-65aa-41db-a890-312b9983dd4c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004169e8}] Aliases:map[]}"
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.46754671Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.477337296Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d1938a72406488b50cb2c2088ccfa0ee93b83e6ac968ad6f62692c1a36968de9 UID:d9370c1f-a3cd-4443-a78d-24bb86844f37 NetNS:/var/run/netns/75ee988e-65aa-41db-a890-312b9983dd4c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004169e8}] Aliases:map[]}"
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.477463981Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.478243266Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.479379062Z" level=info msg="Ran pod sandbox d1938a72406488b50cb2c2088ccfa0ee93b83e6ac968ad6f62692c1a36968de9 with infra container: default/busybox/POD" id=84dcaef9-cd44-4d55-ae3b-7781bdeda597 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.48060114Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8074759-11ab-4d18-a970-4007819ee440 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.480714324Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b8074759-11ab-4d18-a970-4007819ee440 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.480763141Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b8074759-11ab-4d18-a970-4007819ee440 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.4812314Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fcef77c6-9a66-4533-8cbc-a58901f6888c name=/runtime.v1.ImageService/PullImage
	Oct 20 12:40:00 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:00.48256675Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 20 12:40:01 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:01.907714988Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=fcef77c6-9a66-4533-8cbc-a58901f6888c name=/runtime.v1.ImageService/PullImage
	Oct 20 12:40:01 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:01.908625012Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5f46a12c-6ad0-4005-a082-36220aeeefa9 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:40:01 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:01.91019816Z" level=info msg="Creating container: default/busybox/busybox" id=f1981349-7ec0-41b8-a0b7-98d2a154e813 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:40:01 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:01.910324535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:40:01 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:01.914520754Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:40:01 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:01.914995581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:40:01 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:01.939461001Z" level=info msg="Created container e4c7f2b0fe24ad33981f96530d500b3d08b0f27163063a65f542bdd7d7799702: default/busybox/busybox" id=f1981349-7ec0-41b8-a0b7-98d2a154e813 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:40:01 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:01.940086448Z" level=info msg="Starting container: e4c7f2b0fe24ad33981f96530d500b3d08b0f27163063a65f542bdd7d7799702" id=04ea4079-9eb8-43a2-b7fb-ce6decdd0b8c name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:40:01 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:01.942206678Z" level=info msg="Started container" PID=2197 containerID=e4c7f2b0fe24ad33981f96530d500b3d08b0f27163063a65f542bdd7d7799702 description=default/busybox/busybox id=04ea4079-9eb8-43a2-b7fb-ce6decdd0b8c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1938a72406488b50cb2c2088ccfa0ee93b83e6ac968ad6f62692c1a36968de9
	Oct 20 12:40:09 old-k8s-version-384253 crio[774]: time="2025-10-20T12:40:09.159747251Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
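	Everything in this section is CRI-O answering RuntimeService and ImageService RPCs. The same endpoints can be queried directly over the CRI socket recorded in the node annotations below (unix:///var/run/crio/crio.sock); a sketch using the cri-api client, roughly what `crictl ps` does:
	
	package main
	
	import (
		"context"
		"fmt"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// The CRI-O socket path from the node's cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// State is a protobuf enum, so %s prints its symbolic name.
			fmt.Printf("%s\t%s\t%s\n", c.Id, c.State, c.Metadata.Name)
		}
	}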
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	e4c7f2b0fe24a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   d1938a7240648       busybox                                          default
	0c220f264ee7b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   ed128076bd25c       coredns-5dd5756b68-c9869                         kube-system
	8ae1ce3d59a94       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   c326142a21941       storage-provisioner                              kube-system
	db986ec58c4ff       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   9df77a8881bea       kindnet-tr8rl                                    kube-system
	2a44a56e3b4ac       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   8b012e6e25e72       kube-proxy-qfvtm                                 kube-system
	6697530272dbe       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      46 seconds ago      Running             kube-apiserver            0                   c1ce76e392f97       kube-apiserver-old-k8s-version-384253            kube-system
	d18ed844a688f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      46 seconds ago      Running             etcd                      0                   767d15fbb8b0a       etcd-old-k8s-version-384253                      kube-system
	068d2b53b0238       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      46 seconds ago      Running             kube-scheduler            0                   ac42159871560       kube-scheduler-old-k8s-version-384253            kube-system
	af1f5e2fb62d6       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      46 seconds ago      Running             kube-controller-manager   0                   0bca27282d925       kube-controller-manager-old-k8s-version-384253   kube-system
	
	
	==> coredns [0c220f264ee7b7592cd59183847d72d333bfd741678a237ceed474621ace679a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59533 - 21464 "HINFO IN 7569685396313791707.1412617153053569137. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.423842956s
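	The HINFO query above is CoreDNS's startup self-check. To exercise the resolver from a pod, a sketch that points Go's resolver at the kube-dns ClusterIP (10.96.0.10, allocated in the kube-apiserver log below):
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Route all lookups to the cluster DNS service instead of the
		// host's resolver configuration.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
		if err != nil {
			panic(err)
		}
		fmt.Println(addrs) // expect the apiserver ClusterIP, e.g. [10.96.0.1]
	}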
	
	
	==> describe nodes <==
	Name:               old-k8s-version-384253
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-384253
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=old-k8s-version-384253
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_39_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:39:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-384253
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:40:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:40:00 +0000   Mon, 20 Oct 2025 12:39:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:40:00 +0000   Mon, 20 Oct 2025 12:39:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:40:00 +0000   Mon, 20 Oct 2025 12:39:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:40:00 +0000   Mon, 20 Oct 2025 12:39:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-384253
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b6451977-b7d8-4840-89f0-12d79aaa4949
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-c9869                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-384253                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-tr8rl                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-384253             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-384253    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-qfvtm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-384253             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-384253 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-384253 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-384253 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node old-k8s-version-384253 event: Registered Node old-k8s-version-384253 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-384253 status is now: NodeReady
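	The Allocated resources block is just the column sums of the pod table above. A sketch reproducing the 850m CPU-request figure with apimachinery's resource.Quantity:
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/apimachinery/pkg/api/resource"
	)
	
	func main() {
		// CPU requests of the non-terminated pods listed above.
		requests := map[string]string{
			"coredns":                 "100m",
			"etcd":                    "100m",
			"kindnet":                 "100m",
			"kube-apiserver":          "250m",
			"kube-controller-manager": "200m",
			"kube-scheduler":          "100m",
		}
		total := resource.NewMilliQuantity(0, resource.DecimalSI)
		for _, q := range requests {
			v := resource.MustParse(q)
			total.Add(v)
		}
		fmt.Println(total.String()) // 850m, matching the Allocated resources table
	}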
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
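	The repeated "martian source" entries mean the kernel saw packets whose source address is impossible on the receiving interface (here 127.0.0.1 arriving on eth0) and logged them because log_martians is enabled. A sketch that inspects the relevant sysctls; the eth0 interface name is taken from the messages above:
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		// "martian source" is logged only when log_martians is set for the
		// receiving interface (or for "all"); rp_filter governs whether the
		// packet is also dropped.
		for _, path := range []string{
			"/proc/sys/net/ipv4/conf/all/log_martians",
			"/proc/sys/net/ipv4/conf/eth0/log_martians",
			"/proc/sys/net/ipv4/conf/all/rp_filter",
		} {
			data, err := os.ReadFile(path)
			if err != nil {
				fmt.Println(path, err)
				continue
			}
			fmt.Println(path, "=", strings.TrimSpace(string(data)))
		}
	}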
	
	
	==> etcd [d18ed844a688f21568b732fed4b3a461b73b78f754956bf4c0429e9570bd236b] <==
	{"level":"info","ts":"2025-10-20T12:39:25.267843Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T12:39:25.264908Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T12:39:25.265107Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-20T12:39:25.271911Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-20T12:39:25.272026Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T12:39:25.272073Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T12:39:25.273272Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-20T12:39:25.27771Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-20T12:39:42.475538Z","caller":"traceutil/trace.go:171","msg":"trace[594710055] transaction","detail":"{read_only:false; response_revision:324; number_of_response:1; }","duration":"143.313579ms","start":"2025-10-20T12:39:42.332201Z","end":"2025-10-20T12:39:42.475514Z","steps":["trace[594710055] 'process raft request'  (duration: 110.350263ms)","trace[594710055] 'compare'  (duration: 32.842824ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:39:42.47553Z","caller":"traceutil/trace.go:171","msg":"trace[1116336813] linearizableReadLoop","detail":"{readStateIndex:336; appliedIndex:334; }","duration":"131.322042ms","start":"2025-10-20T12:39:42.344183Z","end":"2025-10-20T12:39:42.475505Z","steps":["trace[1116336813] 'read index received'  (duration: 44.10677ms)","trace[1116336813] 'applied index is now lower than readState.Index'  (duration: 87.213859ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:39:42.475671Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.489301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-10-20T12:39:42.475752Z","caller":"traceutil/trace.go:171","msg":"trace[1462018503] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:324; }","duration":"131.586822ms","start":"2025-10-20T12:39:42.34415Z","end":"2025-10-20T12:39:42.475737Z","steps":["trace[1462018503] 'agreement among raft nodes before linearized reading'  (duration: 131.425523ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:39:42.645133Z","caller":"traceutil/trace.go:171","msg":"trace[198970496] linearizableReadLoop","detail":"{readStateIndex:345; appliedIndex:342; }","duration":"134.566093ms","start":"2025-10-20T12:39:42.51055Z","end":"2025-10-20T12:39:42.645116Z","steps":["trace[198970496] 'read index received'  (duration: 59.205559ms)","trace[198970496] 'applied index is now lower than readState.Index'  (duration: 75.359862ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:39:42.645197Z","caller":"traceutil/trace.go:171","msg":"trace[1351947301] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"135.50901ms","start":"2025-10-20T12:39:42.509673Z","end":"2025-10-20T12:39:42.645182Z","steps":["trace[1351947301] 'process raft request'  (duration: 134.760146ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:39:42.645207Z","caller":"traceutil/trace.go:171","msg":"trace[1925435748] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"134.87696ms","start":"2025-10-20T12:39:42.51031Z","end":"2025-10-20T12:39:42.645187Z","steps":["trace[1925435748] 'process raft request'  (duration: 134.750848ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T12:39:42.645298Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.736809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-10-20T12:39:42.645338Z","caller":"traceutil/trace.go:171","msg":"trace[1622539060] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:333; }","duration":"134.806626ms","start":"2025-10-20T12:39:42.510523Z","end":"2025-10-20T12:39:42.64533Z","steps":["trace[1622539060] 'agreement among raft nodes before linearized reading'  (duration: 134.699183ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:39:42.898693Z","caller":"traceutil/trace.go:171","msg":"trace[1816060036] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"240.739498ms","start":"2025-10-20T12:39:42.657924Z","end":"2025-10-20T12:39:42.898664Z","steps":["trace[1816060036] 'process raft request'  (duration: 143.306385ms)","trace[1816060036] 'compare'  (duration: 97.252636ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:39:42.898758Z","caller":"traceutil/trace.go:171","msg":"trace[611097026] linearizableReadLoop","detail":"{readStateIndex:348; appliedIndex:346; }","duration":"146.495784ms","start":"2025-10-20T12:39:42.752249Z","end":"2025-10-20T12:39:42.898745Z","steps":["trace[611097026] 'read index received'  (duration: 48.992798ms)","trace[611097026] 'applied index is now lower than readState.Index'  (duration: 97.502138ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:39:42.898791Z","caller":"traceutil/trace.go:171","msg":"trace[895091373] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"240.570365ms","start":"2025-10-20T12:39:42.65819Z","end":"2025-10-20T12:39:42.898761Z","steps":["trace[895091373] 'process raft request'  (duration: 240.397587ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T12:39:42.898908Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.67472ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2025-10-20T12:39:42.898946Z","caller":"traceutil/trace.go:171","msg":"trace[1380589759] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:336; }","duration":"146.729458ms","start":"2025-10-20T12:39:42.752206Z","end":"2025-10-20T12:39:42.898936Z","steps":["trace[1380589759] 'agreement among raft nodes before linearized reading'  (duration: 146.592155ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:39:47.35003Z","caller":"traceutil/trace.go:171","msg":"trace[1172141461] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"177.923591ms","start":"2025-10-20T12:39:47.172084Z","end":"2025-10-20T12:39:47.350008Z","steps":["trace[1172141461] 'process raft request'  (duration: 114.428116ms)","trace[1172141461] 'compare'  (duration: 63.407542ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:39:47.598188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.920601ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789458889715873 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/kindnet\" mod_revision:313 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kindnet\" value_size:4635 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kindnet\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-20T12:39:47.598379Z","caller":"traceutil/trace.go:171","msg":"trace[1877241510] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"242.461206ms","start":"2025-10-20T12:39:47.355894Z","end":"2025-10-20T12:39:47.598355Z","steps":["trace[1877241510] 'process raft request'  (duration: 117.795713ms)","trace[1877241510] 'compare'  (duration: 123.801749ms)"],"step_count":2}
	
	
	==> kernel <==
	 12:40:10 up  1:22,  0 user,  load average: 5.38, 3.78, 2.09
	Linux old-k8s-version-384253 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [db986ec58c4ff9feb45d4dc5b26dc5e589871df7e64c028587cc919c478910a8] <==
	I1020 12:39:46.505224       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:39:46.505575       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1020 12:39:46.505731       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:39:46.505754       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:39:46.505797       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:39:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:39:46.801791       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:39:46.801822       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:39:46.801832       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:39:46.801985       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:39:47.106824       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:39:47.106958       1 metrics.go:72] Registering metrics
	I1020 12:39:47.107035       1 controller.go:711] "Syncing nftables rules"
	I1020 12:39:56.808956       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:39:56.809012       1 main.go:301] handling current node
	I1020 12:40:06.804844       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:40:06.804869       1 main.go:301] handling current node
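	kindnet's reconcile loop lists the cluster's nodes and handles each node's IPs, as the last four lines show. A sketch of the same enumeration using client-go's in-cluster config (kindnet runs as a DaemonSet pod, so the mounted service account is available):
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		// In-cluster config reads the service account token mounted
		// into the pod.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, a := range n.Status.Addresses {
				if a.Type == corev1.NodeInternalIP {
					fmt.Printf("handling node %s with IP %s\n", n.Name, a.Address)
				}
			}
		}
	}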
	
	
	==> kube-apiserver [6697530272dbe4b4d39026a76740ec5739863b25c2ab8a0219f7b090998211cf] <==
	I1020 12:39:26.666013       1 aggregator.go:166] initial CRD sync complete...
	I1020 12:39:26.666029       1 autoregister_controller.go:141] Starting autoregister controller
	I1020 12:39:26.666034       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 12:39:26.666041       1 cache.go:39] Caches are synced for autoregister controller
	I1020 12:39:26.666069       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 12:39:26.667433       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1020 12:39:26.667460       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1020 12:39:26.667962       1 controller.go:624] quota admission added evaluator for: namespaces
	E1020 12:39:26.678137       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1020 12:39:26.881301       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:39:27.571443       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1020 12:39:27.575976       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1020 12:39:27.576048       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:39:28.017206       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:39:28.065370       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:39:28.180209       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1020 12:39:28.186536       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1020 12:39:28.187850       1 controller.go:624] quota admission added evaluator for: endpoints
	I1020 12:39:28.193456       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:39:28.642012       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1020 12:39:29.967111       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1020 12:39:29.977622       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1020 12:39:29.989048       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1020 12:39:42.098932       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1020 12:39:42.489927       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [af1f5e2fb62d6b5fb807f8fb9835715e8723cba41f3b2887fbbf2ad635147b0f] <==
	I1020 12:39:41.654060       1 shared_informer.go:318] Caches are synced for disruption
	I1020 12:39:41.696867       1 shared_informer.go:318] Caches are synced for deployment
	I1020 12:39:41.700115       1 shared_informer.go:318] Caches are synced for resource quota
	I1020 12:39:42.019650       1 shared_informer.go:318] Caches are synced for garbage collector
	I1020 12:39:42.066257       1 shared_informer.go:318] Caches are synced for garbage collector
	I1020 12:39:42.066290       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1020 12:39:42.109111       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qfvtm"
	I1020 12:39:42.113188       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tr8rl"
	I1020 12:39:42.507267       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1020 12:39:42.902386       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-q8sf6"
	I1020 12:39:42.912843       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-c9869"
	I1020 12:39:42.926204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="419.538506ms"
	I1020 12:39:42.938341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.856709ms"
	I1020 12:39:42.938475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.89µs"
	I1020 12:39:42.941613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.29µs"
	I1020 12:39:43.106729       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1020 12:39:43.119007       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-q8sf6"
	I1020 12:39:43.128964       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.635994ms"
	I1020 12:39:43.138508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.493926ms"
	I1020 12:39:43.138731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.993µs"
	I1020 12:39:57.182916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.386µs"
	I1020 12:39:57.193130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.762µs"
	I1020 12:39:58.182603       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.430151ms"
	I1020 12:39:58.183052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="135.409µs"
	I1020 12:40:01.550984       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
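	The ScalingReplicaSet events above show the coredns Deployment going from 2 replicas down to 1, consistent with minikube keeping a single CoreDNS replica on a single-node cluster. A sketch performing the same change through the Deployment's Scale subresource:
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deployments := cs.AppsV1().Deployments("kube-system")
		scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1 // one replica suffices on a single node
		if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns scaled to 1 replica")
	}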
	
	
	==> kube-proxy [2a44a56e3b4ac514645060b359fdd094595b70132ffca2e69135631dd289e8d1] <==
	I1020 12:39:43.151647       1 server_others.go:69] "Using iptables proxy"
	I1020 12:39:43.163415       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1020 12:39:43.186808       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:39:43.195696       1 server_others.go:152] "Using iptables Proxier"
	I1020 12:39:43.195740       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1020 12:39:43.195746       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1020 12:39:43.195804       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1020 12:39:43.196001       1 server.go:846] "Version info" version="v1.28.0"
	I1020 12:39:43.196028       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:39:43.196702       1 config.go:315] "Starting node config controller"
	I1020 12:39:43.196789       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1020 12:39:43.196807       1 config.go:97] "Starting endpoint slice config controller"
	I1020 12:39:43.196823       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1020 12:39:43.197168       1 config.go:188] "Starting service config controller"
	I1020 12:39:43.197182       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1020 12:39:43.297119       1 shared_informer.go:318] Caches are synced for node config
	I1020 12:39:43.297210       1 shared_informer.go:318] Caches are synced for service config
	I1020 12:39:43.297266       1 shared_informer.go:318] Caches are synced for endpoint slice config
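	The proxier line above notes that kube-proxy sets route_localnet=1 so NodePorts answer on localhost. A sketch toggling and reading back that sysctl (requires root, as kube-proxy has):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		const path = "/proc/sys/net/ipv4/conf/all/route_localnet"
		// kube-proxy writes 1 here so loopback-destined NodePort traffic
		// is routable; reading it back confirms the setting took.
		if err := os.WriteFile(path, []byte("1"), 0o644); err != nil {
			fmt.Println("write (needs root):", err)
		}
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		fmt.Println("route_localnet =", strings.TrimSpace(string(data)))
	}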
	
	
	==> kube-scheduler [068d2b53b0238dbd2e82991b27599ed1788e506b931cb022255334d88a67a25f] <==
	W1020 12:39:26.642167       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1020 12:39:26.642184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1020 12:39:26.642225       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1020 12:39:26.642240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1020 12:39:26.643080       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1020 12:39:26.643093       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1020 12:39:26.643100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1020 12:39:26.643121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1020 12:39:27.501895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1020 12:39:27.501934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1020 12:39:27.554960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1020 12:39:27.554998       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1020 12:39:27.694037       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1020 12:39:27.694159       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1020 12:39:27.727679       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1020 12:39:27.727713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1020 12:39:27.808546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1020 12:39:27.808581       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1020 12:39:27.822267       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1020 12:39:27.822377       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1020 12:39:27.869207       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1020 12:39:27.869339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1020 12:39:27.935285       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1020 12:39:27.935406       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1020 12:39:30.436533       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
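	(editor's note: the reflector warnings above are the usual scheduler startup race: the informers begin listing before the scheduler's RBAC grants and the extension-apiserver-authentication ConfigMap become readable, and they stop once "Caches are synced" is logged, so they are unlikely to be related to this failure.)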
	
	
	==> kubelet <==
	Oct 20 12:39:42 old-k8s-version-384253 kubelet[1375]: I1020 12:39:42.213736    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6vgr\" (UniqueName: \"kubernetes.io/projected/f79b052b-d1dd-4286-b20d-68cf4d168011-kube-api-access-p6vgr\") pod \"kindnet-tr8rl\" (UID: \"f79b052b-d1dd-4286-b20d-68cf4d168011\") " pod="kube-system/kindnet-tr8rl"
	Oct 20 12:39:42 old-k8s-version-384253 kubelet[1375]: I1020 12:39:42.213756    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7a6f01c1-75f3-4e4d-9b4c-7591ec88957b-kube-proxy\") pod \"kube-proxy-qfvtm\" (UID: \"7a6f01c1-75f3-4e4d-9b4c-7591ec88957b\") " pod="kube-system/kube-proxy-qfvtm"
	Oct 20 12:39:42 old-k8s-version-384253 kubelet[1375]: I1020 12:39:42.213875    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f79b052b-d1dd-4286-b20d-68cf4d168011-lib-modules\") pod \"kindnet-tr8rl\" (UID: \"f79b052b-d1dd-4286-b20d-68cf4d168011\") " pod="kube-system/kindnet-tr8rl"
	Oct 20 12:39:42 old-k8s-version-384253 kubelet[1375]: I1020 12:39:42.213942    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a6f01c1-75f3-4e4d-9b4c-7591ec88957b-xtables-lock\") pod \"kube-proxy-qfvtm\" (UID: \"7a6f01c1-75f3-4e4d-9b4c-7591ec88957b\") " pod="kube-system/kube-proxy-qfvtm"
	Oct 20 12:39:42 old-k8s-version-384253 kubelet[1375]: I1020 12:39:42.213969    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4hp2\" (UniqueName: \"kubernetes.io/projected/7a6f01c1-75f3-4e4d-9b4c-7591ec88957b-kube-api-access-p4hp2\") pod \"kube-proxy-qfvtm\" (UID: \"7a6f01c1-75f3-4e4d-9b4c-7591ec88957b\") " pod="kube-system/kube-proxy-qfvtm"
	Oct 20 12:39:42 old-k8s-version-384253 kubelet[1375]: E1020 12:39:42.330233    1375 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 20 12:39:42 old-k8s-version-384253 kubelet[1375]: E1020 12:39:42.330277    1375 projected.go:198] Error preparing data for projected volume kube-api-access-p4hp2 for pod kube-system/kube-proxy-qfvtm: configmap "kube-root-ca.crt" not found
	Oct 20 12:39:42 old-k8s-version-384253 kubelet[1375]: E1020 12:39:42.330234    1375 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 20 12:39:42 old-k8s-version-384253 kubelet[1375]: E1020 12:39:42.330303    1375 projected.go:198] Error preparing data for projected volume kube-api-access-p6vgr for pod kube-system/kindnet-tr8rl: configmap "kube-root-ca.crt" not found
	Oct 20 12:39:42 old-k8s-version-384253 kubelet[1375]: E1020 12:39:42.330388    1375 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a6f01c1-75f3-4e4d-9b4c-7591ec88957b-kube-api-access-p4hp2 podName:7a6f01c1-75f3-4e4d-9b4c-7591ec88957b nodeName:}" failed. No retries permitted until 2025-10-20 12:39:42.830358181 +0000 UTC m=+12.891156128 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p4hp2" (UniqueName: "kubernetes.io/projected/7a6f01c1-75f3-4e4d-9b4c-7591ec88957b-kube-api-access-p4hp2") pod "kube-proxy-qfvtm" (UID: "7a6f01c1-75f3-4e4d-9b4c-7591ec88957b") : configmap "kube-root-ca.crt" not found
	Oct 20 12:39:42 old-k8s-version-384253 kubelet[1375]: E1020 12:39:42.330411    1375 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f79b052b-d1dd-4286-b20d-68cf4d168011-kube-api-access-p6vgr podName:f79b052b-d1dd-4286-b20d-68cf4d168011 nodeName:}" failed. No retries permitted until 2025-10-20 12:39:42.830400115 +0000 UTC m=+12.891198057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p6vgr" (UniqueName: "kubernetes.io/projected/f79b052b-d1dd-4286-b20d-68cf4d168011-kube-api-access-p6vgr") pod "kindnet-tr8rl" (UID: "f79b052b-d1dd-4286-b20d-68cf4d168011") : configmap "kube-root-ca.crt" not found
	Oct 20 12:39:47 old-k8s-version-384253 kubelet[1375]: I1020 12:39:47.351967    1375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qfvtm" podStartSLOduration=5.351909668 podCreationTimestamp="2025-10-20 12:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:39:43.127188019 +0000 UTC m=+13.187985965" watchObservedRunningTime="2025-10-20 12:39:47.351909668 +0000 UTC m=+17.412707617"
	Oct 20 12:39:57 old-k8s-version-384253 kubelet[1375]: I1020 12:39:57.161201    1375 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 20 12:39:57 old-k8s-version-384253 kubelet[1375]: I1020 12:39:57.182815    1375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-tr8rl" podStartSLOduration=11.935007826 podCreationTimestamp="2025-10-20 12:39:42 +0000 UTC" firstStartedPulling="2025-10-20 12:39:43.040668999 +0000 UTC m=+13.101466937" lastFinishedPulling="2025-10-20 12:39:46.288402463 +0000 UTC m=+16.349200401" observedRunningTime="2025-10-20 12:39:47.351643244 +0000 UTC m=+17.412441191" watchObservedRunningTime="2025-10-20 12:39:57.18274129 +0000 UTC m=+27.243539240"
	Oct 20 12:39:57 old-k8s-version-384253 kubelet[1375]: I1020 12:39:57.183199    1375 topology_manager.go:215] "Topology Admit Handler" podUID="716187e3-ac87-4b80-9c9e-8506a57065fa" podNamespace="kube-system" podName="coredns-5dd5756b68-c9869"
	Oct 20 12:39:57 old-k8s-version-384253 kubelet[1375]: I1020 12:39:57.184069    1375 topology_manager.go:215] "Topology Admit Handler" podUID="5d787059-9dff-4f2b-a0ed-7f579464768e" podNamespace="kube-system" podName="storage-provisioner"
	Oct 20 12:39:57 old-k8s-version-384253 kubelet[1375]: I1020 12:39:57.231017    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/716187e3-ac87-4b80-9c9e-8506a57065fa-config-volume\") pod \"coredns-5dd5756b68-c9869\" (UID: \"716187e3-ac87-4b80-9c9e-8506a57065fa\") " pod="kube-system/coredns-5dd5756b68-c9869"
	Oct 20 12:39:57 old-k8s-version-384253 kubelet[1375]: I1020 12:39:57.231072    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsqf5\" (UniqueName: \"kubernetes.io/projected/5d787059-9dff-4f2b-a0ed-7f579464768e-kube-api-access-dsqf5\") pod \"storage-provisioner\" (UID: \"5d787059-9dff-4f2b-a0ed-7f579464768e\") " pod="kube-system/storage-provisioner"
	Oct 20 12:39:57 old-k8s-version-384253 kubelet[1375]: I1020 12:39:57.231202    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpg58\" (UniqueName: \"kubernetes.io/projected/716187e3-ac87-4b80-9c9e-8506a57065fa-kube-api-access-vpg58\") pod \"coredns-5dd5756b68-c9869\" (UID: \"716187e3-ac87-4b80-9c9e-8506a57065fa\") " pod="kube-system/coredns-5dd5756b68-c9869"
	Oct 20 12:39:57 old-k8s-version-384253 kubelet[1375]: I1020 12:39:57.231278    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5d787059-9dff-4f2b-a0ed-7f579464768e-tmp\") pod \"storage-provisioner\" (UID: \"5d787059-9dff-4f2b-a0ed-7f579464768e\") " pod="kube-system/storage-provisioner"
	Oct 20 12:39:58 old-k8s-version-384253 kubelet[1375]: I1020 12:39:58.173989    1375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.173945687 podCreationTimestamp="2025-10-20 12:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:39:58.161161559 +0000 UTC m=+28.221959505" watchObservedRunningTime="2025-10-20 12:39:58.173945687 +0000 UTC m=+28.234743631"
	Oct 20 12:40:00 old-k8s-version-384253 kubelet[1375]: I1020 12:40:00.047825    1375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-c9869" podStartSLOduration=18.047725369 podCreationTimestamp="2025-10-20 12:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:39:58.174187359 +0000 UTC m=+28.234985305" watchObservedRunningTime="2025-10-20 12:40:00.047725369 +0000 UTC m=+30.108523318"
	Oct 20 12:40:00 old-k8s-version-384253 kubelet[1375]: I1020 12:40:00.048358    1375 topology_manager.go:215] "Topology Admit Handler" podUID="d9370c1f-a3cd-4443-a78d-24bb86844f37" podNamespace="default" podName="busybox"
	Oct 20 12:40:00 old-k8s-version-384253 kubelet[1375]: I1020 12:40:00.148431    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vphlj\" (UniqueName: \"kubernetes.io/projected/d9370c1f-a3cd-4443-a78d-24bb86844f37-kube-api-access-vphlj\") pod \"busybox\" (UID: \"d9370c1f-a3cd-4443-a78d-24bb86844f37\") " pod="default/busybox"
	Oct 20 12:40:02 old-k8s-version-384253 kubelet[1375]: I1020 12:40:02.170955    1375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.743759723 podCreationTimestamp="2025-10-20 12:40:00 +0000 UTC" firstStartedPulling="2025-10-20 12:40:00.480949674 +0000 UTC m=+30.541747599" lastFinishedPulling="2025-10-20 12:40:01.908092043 +0000 UTC m=+31.968889988" observedRunningTime="2025-10-20 12:40:02.170828894 +0000 UTC m=+32.231626841" watchObservedRunningTime="2025-10-20 12:40:02.170902112 +0000 UTC m=+32.231700057"
	
	
	==> storage-provisioner [8ae1ce3d59a94955e51355cbdf4c233da71f7e3f06cdfb4cd59df9659168959c] <==
	I1020 12:39:57.549762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 12:39:57.560119       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 12:39:57.560166       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1020 12:39:57.571884       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 12:39:57.572105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-384253_82d5afaf-81a8-441f-b469-2afc5e3b4fe2!
	I1020 12:39:57.572072       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a192978-e7b4-438b-8996-16ddc24fec6e", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-384253_82d5afaf-81a8-441f-b469-2afc5e3b4fe2 became leader
	I1020 12:39:57.672996       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-384253_82d5afaf-81a8-441f-b469-2afc5e3b4fe2!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384253 -n old-k8s-version-384253
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-384253 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-649841 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-649841 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (244.877059ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:40:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
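(editor's note: MK_ADDON_ENABLE_PAUSED means the addon enable aborted in its pre-check that asks the runtime which containers are paused, and it is that check, not the addon itself, that failed. A minimal sketch for reproducing the check by hand, assuming the node from this test is still running:)

  # repeat the exact command minikube runs inside the node
  out/minikube-linux-amd64 -p no-preload-649841 ssh -- sudo runc list -f json
  # the error points at runc's default state directory; /run is a tmpfs in the
  # kic container (see the docker inspect below), so /run/runc only exists once
  # something has created it
  out/minikube-linux-amd64 -p no-preload-649841 ssh -- ls -la /run/runc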
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-649841 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-649841 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-649841 describe deploy/metrics-server -n kube-system: exit status 1 (61.111198ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-649841 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
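(editor's note: the assertion above verifies that --registries=MetricsServer=fake.domain rewrote the addon image; the deployment info is empty here because the enable never got far enough to create it. Had the deployment existed, a sketch of the equivalent manual check:)

  kubectl --context no-preload-649841 -n kube-system get deployment metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
  # expected output: fake.domain/registry.k8s.io/echoserver:1.4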
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-649841
helpers_test.go:243: (dbg) docker inspect no-preload-649841:

-- stdout --
	[
	    {
	        "Id": "3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a",
	        "Created": "2025-10-20T12:39:34.746845301Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:39:34.782171986Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a/hosts",
	        "LogPath": "/var/lib/docker/containers/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a-json.log",
	        "Name": "/no-preload-649841",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-649841:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-649841",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a",
	                "LowerDir": "/var/lib/docker/overlay2/fd7335f5d0e8f3ae86c6eebf190fb273780f3e6fe176c1c06ca1b78cffb62873-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fd7335f5d0e8f3ae86c6eebf190fb273780f3e6fe176c1c06ca1b78cffb62873/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fd7335f5d0e8f3ae86c6eebf190fb273780f3e6fe176c1c06ca1b78cffb62873/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fd7335f5d0e8f3ae86c6eebf190fb273780f3e6fe176c1c06ca1b78cffb62873/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-649841",
	                "Source": "/var/lib/docker/volumes/no-preload-649841/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-649841",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-649841",
	                "name.minikube.sigs.k8s.io": "no-preload-649841",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7b6531a90af4d96de0a99f9a687713a9e975de12b98ef3db7a059cf98ddfc9c",
	            "SandboxKey": "/var/run/docker/netns/e7b6531a90af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-649841": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:e4:da:ab:f2:7b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6720b99a1b6d91a202341926290513ef2c609bf0485dc9d73b76615c6b693c13",
	                    "EndpointID": "6ebfb55cb456dabad72fb8d81b4ed2daca2f0cca1fd4063cc47a1bd942d3be86",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-649841",
	                        "3ebdc406ea00"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
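(editor's note: individual fields from the dump above can be pulled without reading the full JSON, using docker's Go-template support; a sketch against this container:)

  docker inspect -f '{{.State.Status}}' no-preload-649841
  # running
  docker inspect -f '{{(index .NetworkSettings.Networks "no-preload-649841").IPAddress}}' no-preload-649841
  # 192.168.85.2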
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-649841 -n no-preload-649841
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-649841 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-312375 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ ssh     │ -p cilium-312375 sudo crio config                                                                                                                                                                                                             │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ delete  │ -p cilium-312375                                                                                                                                                                                                                              │ cilium-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ start   │ -p cert-expiration-365628 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-365628    │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p force-systemd-flag-670413 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-670413 │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p pause-918853 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ pause   │ -p pause-918853 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ delete  │ -p pause-918853                                                                                                                                                                                                                               │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ start   │ -p cert-options-418869 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p missing-upgrade-123936                                                                                                                                                                                                                     │ missing-upgrade-123936    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ force-systemd-flag-670413 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-670413 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p force-systemd-flag-670413                                                                                                                                                                                                                  │ force-systemd-flag-670413 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ cert-options-418869 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ -p cert-options-418869 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p cert-options-418869                                                                                                                                                                                                                        │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:40 UTC │
	│ stop    │ -p kubernetes-upgrade-196539                                                                                                                                                                                                                  │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-384253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p old-k8s-version-384253 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-384253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-649841 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:40:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
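	(editor's note: applying the format line above to the first record below, "I1020 12:40:27.475458  243047 out.go:360]" decodes as severity I=info, date 10-20, time 12:40:27.475458, thread id 243047, and source location out.go:360.)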
	I1020 12:40:27.475458  243047 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:40:27.475708  243047 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:40:27.475716  243047 out.go:374] Setting ErrFile to fd 2...
	I1020 12:40:27.475720  243047 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:40:27.475908  243047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:40:27.476353  243047 out.go:368] Setting JSON to false
	I1020 12:40:27.477523  243047 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4976,"bootTime":1760959051,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:40:27.477615  243047 start.go:141] virtualization: kvm guest
	I1020 12:40:27.479809  243047 out.go:179] * [old-k8s-version-384253] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:40:27.481402  243047 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:40:27.481399  243047 notify.go:220] Checking for updates...
	I1020 12:40:27.484417  243047 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:40:27.485929  243047 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:40:27.487648  243047 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:40:27.489179  243047 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:40:27.490524  243047 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:40:27.492567  243047 config.go:182] Loaded profile config "old-k8s-version-384253": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1020 12:40:27.494575  243047 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1020 12:40:27.496126  243047 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:40:27.523293  243047 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:40:27.523440  243047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:40:27.584237  243047 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-20 12:40:27.573233638 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:40:27.584346  243047 docker.go:318] overlay module found
	I1020 12:40:27.586421  243047 out.go:179] * Using the docker driver based on existing profile
	I1020 12:40:27.587833  243047 start.go:305] selected driver: docker
	I1020 12:40:27.587847  243047 start.go:925] validating driver "docker" against &{Name:old-k8s-version-384253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-384253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:40:27.587942  243047 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:40:27.588455  243047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:40:27.646863  243047 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-20 12:40:27.637371967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:40:27.647159  243047 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:40:27.647184  243047 cni.go:84] Creating CNI manager for ""
	I1020 12:40:27.647228  243047 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:40:27.647279  243047 start.go:349] cluster config:
	{Name:old-k8s-version-384253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-384253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:40:27.649293  243047 out.go:179] * Starting "old-k8s-version-384253" primary control-plane node in "old-k8s-version-384253" cluster
	I1020 12:40:27.650673  243047 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:40:27.652237  243047 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:40:27.653371  243047 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 12:40:27.653412  243047 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1020 12:40:27.653421  243047 cache.go:58] Caching tarball of preloaded images
	I1020 12:40:27.653482  243047 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:40:27.653501  243047 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:40:27.653512  243047 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1020 12:40:27.653618  243047 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/old-k8s-version-384253/config.json ...
	I1020 12:40:27.674274  243047 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:40:27.674296  243047 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:40:27.674311  243047 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:40:27.674332  243047 start.go:360] acquireMachinesLock for old-k8s-version-384253: {Name:mk06f9e2daf6abca4fe4980cf9ba903ad66045d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:27.674390  243047 start.go:364] duration metric: took 36.476µs to acquireMachinesLock for "old-k8s-version-384253"
	I1020 12:40:27.674406  243047 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:40:27.674412  243047 fix.go:54] fixHost starting: 
	I1020 12:40:27.674600  243047 cli_runner.go:164] Run: docker container inspect old-k8s-version-384253 --format={{.State.Status}}
	I1020 12:40:27.692316  243047 fix.go:112] recreateIfNeeded on old-k8s-version-384253: state=Stopped err=<nil>
	W1020 12:40:27.692344  243047 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 12:40:27.470876  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1020 12:40:27.470931  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
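Editor's note: the two interleaved lines above come from a second minikube process (236655) polling the restarted apiserver's /healthz endpoint until it answers. A minimal Go sketch of that kind of poll follows; the endpoint URL is taken from the log, while the 2s timeout, retry count, and InsecureSkipVerify are illustrative assumptions only (minikube itself verifies against the cluster CA).

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second, // mirrors the Client.Timeout error in the log
    		Transport: &http.Transport{
    			// Sketch only; a real client should trust the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for i := 0; i < 10; i++ {
    		resp, err := client.Get("https://192.168.94.2:8443/healthz")
    		if err != nil {
    			fmt.Println("apiserver not ready:", err)
    			time.Sleep(time.Second)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("healthz status:", resp.StatusCode)
    		return
    	}
    }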
	
	
	==> CRI-O <==
	Oct 20 12:40:20 no-preload-649841 crio[773]: time="2025-10-20T12:40:20.74956607Z" level=info msg="Starting container: fcbe4c0ad23aa77dee9d5098432ac88eca674eb5bde48650da62c750f328c25e" id=99ee86cc-63e6-484e-906b-53d5b79c9262 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:40:20 no-preload-649841 crio[773]: time="2025-10-20T12:40:20.751298295Z" level=info msg="Started container" PID=2906 containerID=fcbe4c0ad23aa77dee9d5098432ac88eca674eb5bde48650da62c750f328c25e description=kube-system/coredns-66bc5c9577-7d88p/coredns id=99ee86cc-63e6-484e-906b-53d5b79c9262 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b69892cd92d1e7a8450b15e44e8243a010fa3928de8784f92c0f4e2b2622aa97
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.31397677Z" level=info msg="Running pod sandbox: default/busybox/POD" id=67865d4f-647f-477b-8431-de1a86600e0a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.314060054Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.318821141Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4ea84eaa7eafbe62179f497f426820ef4aec8ad5c96485594e07eca38efd909c UID:45dbbb45-578b-4f3e-a055-b8e545812159 NetNS:/var/run/netns/1d858003-fda8-4d2a-82c8-da813314af5a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005189b8}] Aliases:map[]}"
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.31885381Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.328916815Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4ea84eaa7eafbe62179f497f426820ef4aec8ad5c96485594e07eca38efd909c UID:45dbbb45-578b-4f3e-a055-b8e545812159 NetNS:/var/run/netns/1d858003-fda8-4d2a-82c8-da813314af5a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005189b8}] Aliases:map[]}"
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.329051723Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.329799922Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.330522455Z" level=info msg="Ran pod sandbox 4ea84eaa7eafbe62179f497f426820ef4aec8ad5c96485594e07eca38efd909c with infra container: default/busybox/POD" id=67865d4f-647f-477b-8431-de1a86600e0a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.331816559Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a0177d37-1e7f-48da-9e3e-1b181c5789fa name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.331945841Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a0177d37-1e7f-48da-9e3e-1b181c5789fa name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.331982755Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a0177d37-1e7f-48da-9e3e-1b181c5789fa name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.33253265Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9612ac93-8aee-4533-be3b-a26a743a87c4 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:40:23 no-preload-649841 crio[773]: time="2025-10-20T12:40:23.333897875Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 20 12:40:24 no-preload-649841 crio[773]: time="2025-10-20T12:40:24.755244231Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9612ac93-8aee-4533-be3b-a26a743a87c4 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:40:24 no-preload-649841 crio[773]: time="2025-10-20T12:40:24.755885227Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ba0d6cb3-7aaa-498d-bed6-150c1bff4b69 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:40:24 no-preload-649841 crio[773]: time="2025-10-20T12:40:24.757221038Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9b34615f-4d49-49ae-8043-73719f1f261c name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:40:24 no-preload-649841 crio[773]: time="2025-10-20T12:40:24.760967078Z" level=info msg="Creating container: default/busybox/busybox" id=b64504fa-d7dc-468f-bffc-03c14109809a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:40:24 no-preload-649841 crio[773]: time="2025-10-20T12:40:24.761135057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:40:24 no-preload-649841 crio[773]: time="2025-10-20T12:40:24.764802409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:40:24 no-preload-649841 crio[773]: time="2025-10-20T12:40:24.765254377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:40:24 no-preload-649841 crio[773]: time="2025-10-20T12:40:24.794543013Z" level=info msg="Created container bc6faa9e0c65ec81b56b079e84ffb3c3bfe3e43f75cbaf358a968a44ed838dc0: default/busybox/busybox" id=b64504fa-d7dc-468f-bffc-03c14109809a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:40:24 no-preload-649841 crio[773]: time="2025-10-20T12:40:24.795187795Z" level=info msg="Starting container: bc6faa9e0c65ec81b56b079e84ffb3c3bfe3e43f75cbaf358a968a44ed838dc0" id=19644397-04d3-49fc-b28d-ca2ad7c02c71 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:40:24 no-preload-649841 crio[773]: time="2025-10-20T12:40:24.79715239Z" level=info msg="Started container" PID=2979 containerID=bc6faa9e0c65ec81b56b079e84ffb3c3bfe3e43f75cbaf358a968a44ed838dc0 description=default/busybox/busybox id=19644397-04d3-49fc-b28d-ca2ad7c02c71 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ea84eaa7eafbe62179f497f426820ef4aec8ad5c96485594e07eca38efd909c
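Editor's note: the CRI-O lines above trace a complete pod launch: RunPodSandbox, CNI attachment via kindnet, PullImage for gcr.io/k8s-minikube/busybox:1.28.4-glibc, then CreateContainer and StartContainer. The same state can be inspected out-of-band with crictl; below is a small Go wrapper in the spirit of minikube's cli_runner. It assumes crictl is installed and pointed at the CRI-O socket (for example via --runtime-endpoint unix:///var/run/crio/crio.sock) and that the caller has the needed privileges.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and prints its combined output, roughly the way
    // minikube's cli_runner logs "Run: ..." lines above.
    func run(name string, args ...string) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	fmt.Printf("$ %s %v\n%s", name, args, out)
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    }

    func main() {
    	// These reproduce the kind of data shown in the "container status"
    	// section below: sandboxes, containers, and cached images.
    	run("crictl", "pods")
    	run("crictl", "ps", "-a")
    	run("crictl", "images")
    }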
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bc6faa9e0c65e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   4ea84eaa7eafb       busybox                                     default
	fcbe4c0ad23aa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   b69892cd92d1e       coredns-66bc5c9577-7d88p                    kube-system
	df11f9447374c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   2b337cc761a95       storage-provisioner                         kube-system
	7ed0c6ea89da3       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   8dd49005c5b05       kindnet-ghtcz                               kube-system
	8f685635acb24       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   15ab47975222d       kube-proxy-6vpwz                            kube-system
	43cb6c491a2b7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   2887406861e31       kube-scheduler-no-preload-649841            kube-system
	08b812f170544       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   593facde54ab2       kube-controller-manager-no-preload-649841   kube-system
	1a3c841145ebd       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   db0be62a9150c       kube-apiserver-no-preload-649841            kube-system
	c27a315fca488       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   9fd9f3225d266       etcd-no-preload-649841                      kube-system
	
	
	==> coredns [fcbe4c0ad23aa77dee9d5098432ac88eca674eb5bde48650da62c750f328c25e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54065 - 34986 "HINFO IN 7668817121609422936.5562802085265291954. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.42115211s
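Editor's note: the HINFO query for a random name above is CoreDNS's startup self-check confirming it answers on :53. One way to exercise it from Go is a custom net.Resolver dialed straight at the cluster DNS ClusterIP; the 10.96.0.10 address comes from the apiserver's "allocated clusterIPs" line for kube-dns further down, and the lookup name is just an example.

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    )

    func main() {
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
    			// Ignore the host's default resolver and talk to CoreDNS directly.
    			return (&net.Dialer{}).DialContext(ctx, "udp", "10.96.0.10:53")
    		},
    	}
    	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
    	fmt.Println(addrs, err)
    }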
	
	
	==> describe nodes <==
	Name:               no-preload-649841
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-649841
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=no-preload-649841
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_40_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:39:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-649841
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:40:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:40:20 +0000   Mon, 20 Oct 2025 12:39:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:40:20 +0000   Mon, 20 Oct 2025 12:39:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:40:20 +0000   Mon, 20 Oct 2025 12:39:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:40:20 +0000   Mon, 20 Oct 2025 12:40:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-649841
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                433a6564-548d-4f1d-8a4a-223c020110ee
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-7d88p                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-649841                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-ghtcz                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-649841             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-649841    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-6vpwz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-649841             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node no-preload-649841 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node no-preload-649841 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node no-preload-649841 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node no-preload-649841 event: Registered Node no-preload-649841 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-649841 status is now: NodeReady
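Editor's note: everything in the describe output above is also reachable programmatically. The client-go sketch below prints the same condition table (MemoryPressure, DiskPressure, PIDPressure, Ready) for this node; the kubeconfig path is an assumption, and any kubeconfig that reaches the cluster works.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Path assumed for illustration; use your own kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-649841", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range node.Status.Conditions {
    		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
    	}
    }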
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [c27a315fca48841ab6f62fc6ebec770d67ba74ef8db05c0f810838b326799afb] <==
	{"level":"warn","ts":"2025-10-20T12:39:59.119524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.125476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.131365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.137850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.146353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.152759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.159525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.169128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.181351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.187370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.193500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.199901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.205803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.213034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.219608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.225259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.231268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.245136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.258815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.265152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.271079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.283765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.290447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.296361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:39:59.345764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48646","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:40:32 up  1:23,  0 user,  load average: 4.12, 3.60, 2.06
	Linux no-preload-649841 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7ed0c6ea89da329297c8d97f6a52370fe02e32b704fe1a09d79a394cbf6b70de] <==
	I1020 12:40:09.523939       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:40:09.524258       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 12:40:09.524418       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:40:09.524433       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:40:09.524458       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:40:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:40:09.818677       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:40:09.818756       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:40:09.818792       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:40:09.818958       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:40:10.218921       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:40:10.218950       1 metrics.go:72] Registering metrics
	I1020 12:40:10.219026       1 controller.go:711] "Syncing nftables rules"
	I1020 12:40:19.822423       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:40:19.822496       1 main.go:301] handling current node
	I1020 12:40:29.822240       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:40:29.822282       1 main.go:301] handling current node
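Editor's note: the paired "Handling node with IPs" / "handling current node" lines repeat on a fixed cadence as kindnet re-reconciles every known node. A stripped-down sketch of that loop shape follows; the 10s interval and the node map are inferred from the log spacing above, not taken from kindnet's source.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Hypothetical node set; a real daemon would learn this from an informer.
    	nodes := map[string]string{"no-preload-649841": "192.168.85.2"}
    	ticker := time.NewTicker(10 * time.Second)
    	defer ticker.Stop()
    	for range ticker.C {
    		for name, ip := range nodes {
    			fmt.Printf("Handling node %s with IPs: %s\n", name, ip)
    		}
    	}
    }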
	
	
	==> kube-apiserver [1a3c841145ebd8886ea662bbfc4840a4702537de9b592523dbc14b6d74f9039f] <==
	I1020 12:39:59.806295       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1020 12:39:59.807757       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:39:59.811496       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:39:59.811738       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1020 12:39:59.816858       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:39:59.816921       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:39:59.829195       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:40:00.709968       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1020 12:40:00.714637       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1020 12:40:00.714655       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:40:01.168927       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:40:01.205561       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:40:01.315005       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1020 12:40:01.321006       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1020 12:40:01.321976       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:40:01.326252       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:40:01.737150       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:40:02.208422       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:40:02.217457       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1020 12:40:02.224927       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 12:40:07.490435       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1020 12:40:07.693547       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:40:07.697354       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:40:07.839727       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1020 12:40:31.099137       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:43884: use of closed network connection
	
	
	==> kube-controller-manager [08b812f170544f756487b373d41a6af7397a37b332c832f38fb5c444a6cd4a09] <==
	I1020 12:40:06.736117       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 12:40:06.736451       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 12:40:06.736604       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 12:40:06.736730       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1020 12:40:06.737007       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 12:40:06.737112       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 12:40:06.737143       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1020 12:40:06.737382       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 12:40:06.737625       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 12:40:06.737699       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 12:40:06.738515       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 12:40:06.738542       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:40:06.738571       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 12:40:06.739686       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:40:06.739724       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 12:40:06.740909       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 12:40:06.742154       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:40:06.742264       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:40:06.746686       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 12:40:06.753033       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1020 12:40:06.762313       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:40:06.764494       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:40:06.764509       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:40:06.764518       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:40:21.688253       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
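Editor's note: each "Caches are synced" line marks a controller's shared informer completing its initial LIST before reconciliation starts. The same handshake in a standalone client-go program looks like the sketch below; the kubeconfig path and the 30s resync period are assumptions.

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path assumed
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
    	pods := factory.Core().V1().Pods().Informer()
    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop)
    	// Blocks until the initial LIST/WATCH is established -- the moment a
    	// controller would log "Caches are synced".
    	if !cache.WaitForCacheSync(stop, pods.HasSynced) {
    		panic("caches never synced")
    	}
    	fmt.Println("caches are synced; safe to start reconciling")
    }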
	
	
	==> kube-proxy [8f685635acb246573afa7f20765b74ea4f8a4080e94b090fec6a29f4b92a41e7] <==
	I1020 12:40:07.905837       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:40:07.960872       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:40:08.061379       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:40:08.061426       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 12:40:08.061500       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:40:08.083297       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:40:08.083353       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:40:08.088829       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:40:08.089308       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:40:08.089350       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:40:08.090560       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:40:08.090583       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:40:08.090637       1 config.go:200] "Starting service config controller"
	I1020 12:40:08.090647       1 config.go:309] "Starting node config controller"
	I1020 12:40:08.090654       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:40:08.090658       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:40:08.090704       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:40:08.090758       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:40:08.191076       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:40:08.191537       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 12:40:08.191676       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:40:08.191710       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [43cb6c491a2b791a86115b402a2adef60596654a42c11cfabe66d10b7a508551] <==
	E1020 12:39:59.755206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:39:59.755230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:39:59.755374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:39:59.755404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:39:59.755503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 12:39:59.755523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:39:59.755556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:39:59.755585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:39:59.755614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:39:59.755617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 12:39:59.755680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:39:59.755757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:39:59.755795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 12:40:00.611008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:40:00.681544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 12:40:00.693866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:40:00.712000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:40:00.736009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:40:00.751402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 12:40:00.769755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:40:00.836874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:40:00.905310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 12:40:00.934666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:40:00.999228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1020 12:40:01.251446       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
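Editor's note: the burst of "Failed to watch ... forbidden" errors is the usual startup race: the scheduler's informers begin listing before RBAC bootstrap finishes, and the errors stop once its client-ca informer syncs (last line). Permissions can be probed the same way the API server evaluates them, via a SelfSubjectAccessReview; the kubeconfig path is an assumption, and the verb/resource pair below is one of the watches that failed above.

    package main

    import (
    	"context"
    	"fmt"

    	authorizationv1 "k8s.io/api/authorization/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path assumed
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	sar := &authorizationv1.SelfSubjectAccessReview{
    		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
    			ResourceAttributes: &authorizationv1.ResourceAttributes{
    				Verb:     "list",
    				Resource: "pods",
    			},
    		},
    	}
    	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("can list pods:", res.Status.Allowed)
    }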
	
	
	==> kubelet <==
	Oct 20 12:40:03 no-preload-649841 kubelet[2285]: I1020 12:40:03.119403    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-649841" podStartSLOduration=1.119384991 podStartE2EDuration="1.119384991s" podCreationTimestamp="2025-10-20 12:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:40:03.108558694 +0000 UTC m=+1.126415978" watchObservedRunningTime="2025-10-20 12:40:03.119384991 +0000 UTC m=+1.137242275"
	Oct 20 12:40:03 no-preload-649841 kubelet[2285]: I1020 12:40:03.130985    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-649841" podStartSLOduration=1.130966621 podStartE2EDuration="1.130966621s" podCreationTimestamp="2025-10-20 12:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:40:03.119590995 +0000 UTC m=+1.137448272" watchObservedRunningTime="2025-10-20 12:40:03.130966621 +0000 UTC m=+1.148823905"
	Oct 20 12:40:03 no-preload-649841 kubelet[2285]: I1020 12:40:03.140754    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-649841" podStartSLOduration=2.140735036 podStartE2EDuration="2.140735036s" podCreationTimestamp="2025-10-20 12:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:40:03.131169695 +0000 UTC m=+1.149026970" watchObservedRunningTime="2025-10-20 12:40:03.140735036 +0000 UTC m=+1.158592320"
	Oct 20 12:40:03 no-preload-649841 kubelet[2285]: I1020 12:40:03.149139    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-649841" podStartSLOduration=1.149122707 podStartE2EDuration="1.149122707s" podCreationTimestamp="2025-10-20 12:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:40:03.14092988 +0000 UTC m=+1.158787160" watchObservedRunningTime="2025-10-20 12:40:03.149122707 +0000 UTC m=+1.166980058"
	Oct 20 12:40:06 no-preload-649841 kubelet[2285]: I1020 12:40:06.792656    2285 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 20 12:40:06 no-preload-649841 kubelet[2285]: I1020 12:40:06.793354    2285 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 20 12:40:07 no-preload-649841 kubelet[2285]: I1020 12:40:07.590184    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c057504d-908d-4f7f-995b-0524392b82ff-xtables-lock\") pod \"kindnet-ghtcz\" (UID: \"c057504d-908d-4f7f-995b-0524392b82ff\") " pod="kube-system/kindnet-ghtcz"
	Oct 20 12:40:07 no-preload-649841 kubelet[2285]: I1020 12:40:07.590239    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c057504d-908d-4f7f-995b-0524392b82ff-lib-modules\") pod \"kindnet-ghtcz\" (UID: \"c057504d-908d-4f7f-995b-0524392b82ff\") " pod="kube-system/kindnet-ghtcz"
	Oct 20 12:40:07 no-preload-649841 kubelet[2285]: I1020 12:40:07.590267    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6ef821cc-1bf1-4ded-8a94-d320d898c160-kube-proxy\") pod \"kube-proxy-6vpwz\" (UID: \"6ef821cc-1bf1-4ded-8a94-d320d898c160\") " pod="kube-system/kube-proxy-6vpwz"
	Oct 20 12:40:07 no-preload-649841 kubelet[2285]: I1020 12:40:07.590336    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ef821cc-1bf1-4ded-8a94-d320d898c160-xtables-lock\") pod \"kube-proxy-6vpwz\" (UID: \"6ef821cc-1bf1-4ded-8a94-d320d898c160\") " pod="kube-system/kube-proxy-6vpwz"
	Oct 20 12:40:07 no-preload-649841 kubelet[2285]: I1020 12:40:07.590384    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkltn\" (UniqueName: \"kubernetes.io/projected/c057504d-908d-4f7f-995b-0524392b82ff-kube-api-access-rkltn\") pod \"kindnet-ghtcz\" (UID: \"c057504d-908d-4f7f-995b-0524392b82ff\") " pod="kube-system/kindnet-ghtcz"
	Oct 20 12:40:07 no-preload-649841 kubelet[2285]: I1020 12:40:07.590413    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgbf5\" (UniqueName: \"kubernetes.io/projected/6ef821cc-1bf1-4ded-8a94-d320d898c160-kube-api-access-zgbf5\") pod \"kube-proxy-6vpwz\" (UID: \"6ef821cc-1bf1-4ded-8a94-d320d898c160\") " pod="kube-system/kube-proxy-6vpwz"
	Oct 20 12:40:07 no-preload-649841 kubelet[2285]: I1020 12:40:07.590442    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c057504d-908d-4f7f-995b-0524392b82ff-cni-cfg\") pod \"kindnet-ghtcz\" (UID: \"c057504d-908d-4f7f-995b-0524392b82ff\") " pod="kube-system/kindnet-ghtcz"
	Oct 20 12:40:07 no-preload-649841 kubelet[2285]: I1020 12:40:07.590455    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ef821cc-1bf1-4ded-8a94-d320d898c160-lib-modules\") pod \"kube-proxy-6vpwz\" (UID: \"6ef821cc-1bf1-4ded-8a94-d320d898c160\") " pod="kube-system/kube-proxy-6vpwz"
	Oct 20 12:40:08 no-preload-649841 kubelet[2285]: I1020 12:40:08.108320    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6vpwz" podStartSLOduration=1.1083013369999999 podStartE2EDuration="1.108301337s" podCreationTimestamp="2025-10-20 12:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:40:08.10820815 +0000 UTC m=+6.126065437" watchObservedRunningTime="2025-10-20 12:40:08.108301337 +0000 UTC m=+6.126158622"
	Oct 20 12:40:10 no-preload-649841 kubelet[2285]: I1020 12:40:10.115855    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ghtcz" podStartSLOduration=1.622113133 podStartE2EDuration="3.115836611s" podCreationTimestamp="2025-10-20 12:40:07 +0000 UTC" firstStartedPulling="2025-10-20 12:40:07.819848818 +0000 UTC m=+5.837706093" lastFinishedPulling="2025-10-20 12:40:09.313572307 +0000 UTC m=+7.331429571" observedRunningTime="2025-10-20 12:40:10.115621814 +0000 UTC m=+8.133479107" watchObservedRunningTime="2025-10-20 12:40:10.115836611 +0000 UTC m=+8.133693894"
	Oct 20 12:40:20 no-preload-649841 kubelet[2285]: I1020 12:40:20.370649    2285 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 20 12:40:20 no-preload-649841 kubelet[2285]: I1020 12:40:20.486831    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c859d9e-5016-485a-adc3-b33089248f2f-config-volume\") pod \"coredns-66bc5c9577-7d88p\" (UID: \"6c859d9e-5016-485a-adc3-b33089248f2f\") " pod="kube-system/coredns-66bc5c9577-7d88p"
	Oct 20 12:40:20 no-preload-649841 kubelet[2285]: I1020 12:40:20.486878    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7ee83276-3c65-4f28-88df-db5aca9ab40b-tmp\") pod \"storage-provisioner\" (UID: \"7ee83276-3c65-4f28-88df-db5aca9ab40b\") " pod="kube-system/storage-provisioner"
	Oct 20 12:40:20 no-preload-649841 kubelet[2285]: I1020 12:40:20.486893    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqm64\" (UniqueName: \"kubernetes.io/projected/7ee83276-3c65-4f28-88df-db5aca9ab40b-kube-api-access-rqm64\") pod \"storage-provisioner\" (UID: \"7ee83276-3c65-4f28-88df-db5aca9ab40b\") " pod="kube-system/storage-provisioner"
	Oct 20 12:40:20 no-preload-649841 kubelet[2285]: I1020 12:40:20.486914    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb7dw\" (UniqueName: \"kubernetes.io/projected/6c859d9e-5016-485a-adc3-b33089248f2f-kube-api-access-cb7dw\") pod \"coredns-66bc5c9577-7d88p\" (UID: \"6c859d9e-5016-485a-adc3-b33089248f2f\") " pod="kube-system/coredns-66bc5c9577-7d88p"
	Oct 20 12:40:21 no-preload-649841 kubelet[2285]: I1020 12:40:21.138950    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.13893299 podStartE2EDuration="13.13893299s" podCreationTimestamp="2025-10-20 12:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:40:21.138744398 +0000 UTC m=+19.156601664" watchObservedRunningTime="2025-10-20 12:40:21.13893299 +0000 UTC m=+19.156790276"
	Oct 20 12:40:21 no-preload-649841 kubelet[2285]: I1020 12:40:21.148984    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7d88p" podStartSLOduration=14.148965822 podStartE2EDuration="14.148965822s" podCreationTimestamp="2025-10-20 12:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:40:21.1488652 +0000 UTC m=+19.166722485" watchObservedRunningTime="2025-10-20 12:40:21.148965822 +0000 UTC m=+19.166823105"
	Oct 20 12:40:23 no-preload-649841 kubelet[2285]: I1020 12:40:23.102367    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8c2z\" (UniqueName: \"kubernetes.io/projected/45dbbb45-578b-4f3e-a055-b8e545812159-kube-api-access-q8c2z\") pod \"busybox\" (UID: \"45dbbb45-578b-4f3e-a055-b8e545812159\") " pod="default/busybox"
	Oct 20 12:40:25 no-preload-649841 kubelet[2285]: I1020 12:40:25.149731    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.725180377 podStartE2EDuration="2.149709736s" podCreationTimestamp="2025-10-20 12:40:23 +0000 UTC" firstStartedPulling="2025-10-20 12:40:23.33217511 +0000 UTC m=+21.350032376" lastFinishedPulling="2025-10-20 12:40:24.756704473 +0000 UTC m=+22.774561735" observedRunningTime="2025-10-20 12:40:25.14960194 +0000 UTC m=+23.167459223" watchObservedRunningTime="2025-10-20 12:40:25.149709736 +0000 UTC m=+23.167567010"
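Editor's note: the kubelet's pod_startup_latency_tracker lines encode a simple subtraction: podStartE2EDuration is the observed-running time minus the pod creation timestamp. Reproducing the busybox number from the last entry, with both timestamps copied from that log line:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	created := time.Date(2025, 10, 20, 12, 40, 23, 0, time.UTC)
    	running := time.Date(2025, 10, 20, 12, 40, 25, 149709736, time.UTC)
    	// Prints 2.149709736s, matching podStartE2EDuration for default/busybox.
    	fmt.Println(running.Sub(created))
    }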
	
	
	==> storage-provisioner [df11f9447374c200118003715d20aa2e3a41f4be5ace204efd03394f19c94351] <==
	I1020 12:40:20.757322       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 12:40:20.765399       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 12:40:20.765436       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 12:40:20.767338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:40:20.771986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:40:20.772234       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 12:40:20.772375       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a4fe8ee9-f82c-4cee-82a6-30314a2d696f", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-649841_b4a39a50-b6d0-4be4-ae94-3c294b25c46c became leader
	I1020 12:40:20.772568       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-649841_b4a39a50-b6d0-4be4-ae94-3c294b25c46c!
	W1020 12:40:20.774621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:40:20.778985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:40:20.873578       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-649841_b4a39a50-b6d0-4be4-ae94-3c294b25c46c!
	W1020 12:40:22.781703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:40:22.785624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:40:24.789339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:40:24.794636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:40:26.797255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:40:26.801249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:40:28.804596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:40:28.809905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:40:30.812821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:40:30.816220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
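Note on the repeated client-go warnings in the storage-provisioner log above: the provisioner takes its leader-election lock on a core/v1 Endpoints object (see the leaderelection.go and Event lines), so every acquire/renew of the lock trips the "v1 Endpoints is deprecated in v1.33+" warning. For reference, a minimal sketch of the modern alternative, a coordination/v1 Lease lock via client-go, follows; the namespace and lock name mirror the log, while the identity and timings are illustrative and this is not the provisioner's actual code.

	// Sketch: Lease-based leader election instead of the deprecated
	// Endpoints lock. Renewals touch coordination.k8s.io/v1 Leases,
	// so the Endpoints deprecation warning above is never emitted.
	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-provisioner"}, // illustrative identity
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
				OnStoppedLeading: func() { /* stop work; another replica now leads */ },
			},
		})
	}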
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-649841 -n no-preload-649841
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-649841 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.06s)

TestStartStop/group/old-k8s-version/serial/Pause (6.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-384253 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-384253 --alsologtostderr -v=1: exit status 80 (2.274683785s)

-- stdout --
	* Pausing node old-k8s-version-384253 ... 
	
	

-- /stdout --
** stderr ** 
	I1020 12:41:23.917563  250275 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:41:23.917861  250275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:41:23.917872  250275 out.go:374] Setting ErrFile to fd 2...
	I1020 12:41:23.917877  250275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:41:23.918111  250275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:41:23.918383  250275 out.go:368] Setting JSON to false
	I1020 12:41:23.918436  250275 mustload.go:65] Loading cluster: old-k8s-version-384253
	I1020 12:41:23.918855  250275 config.go:182] Loaded profile config "old-k8s-version-384253": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1020 12:41:23.919270  250275 cli_runner.go:164] Run: docker container inspect old-k8s-version-384253 --format={{.State.Status}}
	I1020 12:41:23.939344  250275 host.go:66] Checking if "old-k8s-version-384253" exists ...
	I1020 12:41:23.939637  250275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:41:24.002487  250275 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-20 12:41:23.991649074 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:41:24.003137  250275 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-384253 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 12:41:24.005222  250275 out.go:179] * Pausing node old-k8s-version-384253 ... 
	I1020 12:41:24.007590  250275 host.go:66] Checking if "old-k8s-version-384253" exists ...
	I1020 12:41:24.007940  250275 ssh_runner.go:195] Run: systemctl --version
	I1020 12:41:24.007990  250275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-384253
	I1020 12:41:24.027623  250275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/old-k8s-version-384253/id_rsa Username:docker}
	I1020 12:41:24.127663  250275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:41:24.141002  250275 pause.go:52] kubelet running: true
	I1020 12:41:24.141075  250275 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:41:24.307667  250275 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:41:24.307746  250275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:41:24.374602  250275 cri.go:89] found id: "bbb58682200169c5790db34c4f863aebe576c9d5fa7ac27ea2f0a8bbeaa434e8"
	I1020 12:41:24.374628  250275 cri.go:89] found id: "a9c9157678ee8818b6613789a87ebd56bf24f6bce34399e3307522241d499bf8"
	I1020 12:41:24.374632  250275 cri.go:89] found id: "619011c2bcd4b73d4a9961d9b414cc1b7bf872d29079384e5981cc7d29203fa9"
	I1020 12:41:24.374636  250275 cri.go:89] found id: "e1aadd87abcbc99c03699210c5ae4f8e8e1782905fba250d326b688cbbd48f15"
	I1020 12:41:24.374638  250275 cri.go:89] found id: "81f8635595c355db5ae5a00afb41d8dd5cb7bff59c4bdad7af60c092966dab72"
	I1020 12:41:24.374641  250275 cri.go:89] found id: "5e481e30b8ec40735fa2f558bf9dd408ddb9a893973ee6253a8f9996d7dde47c"
	I1020 12:41:24.374643  250275 cri.go:89] found id: "bc8f02baa8770ba6721a99030f25088261d2c0cd3db222046296ba97c0e0d54e"
	I1020 12:41:24.374646  250275 cri.go:89] found id: "e1cc7b6a003edbbb90ecfe2f4ca699c5caa7bc9e2e4aab94b226caa3576d4308"
	I1020 12:41:24.374648  250275 cri.go:89] found id: "f6c082ba3c5bb39c9c14011daf9f0b91a04643d84063cc518b4449099b0fd75e"
	I1020 12:41:24.374654  250275 cri.go:89] found id: "11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe"
	I1020 12:41:24.374657  250275 cri.go:89] found id: "332105a576843e1a18f1ce5bc18761e73f98bfb38484b691cc02ba884d3d6026"
	I1020 12:41:24.374659  250275 cri.go:89] found id: ""
	I1020 12:41:24.374697  250275 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:41:24.386760  250275 retry.go:31] will retry after 251.753393ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:41:24Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:41:24.639328  250275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:41:24.652825  250275 pause.go:52] kubelet running: false
	I1020 12:41:24.652876  250275 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:41:24.796404  250275 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:41:24.796484  250275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:41:24.864380  250275 cri.go:89] found id: "bbb58682200169c5790db34c4f863aebe576c9d5fa7ac27ea2f0a8bbeaa434e8"
	I1020 12:41:24.864398  250275 cri.go:89] found id: "a9c9157678ee8818b6613789a87ebd56bf24f6bce34399e3307522241d499bf8"
	I1020 12:41:24.864402  250275 cri.go:89] found id: "619011c2bcd4b73d4a9961d9b414cc1b7bf872d29079384e5981cc7d29203fa9"
	I1020 12:41:24.864405  250275 cri.go:89] found id: "e1aadd87abcbc99c03699210c5ae4f8e8e1782905fba250d326b688cbbd48f15"
	I1020 12:41:24.864408  250275 cri.go:89] found id: "81f8635595c355db5ae5a00afb41d8dd5cb7bff59c4bdad7af60c092966dab72"
	I1020 12:41:24.864412  250275 cri.go:89] found id: "5e481e30b8ec40735fa2f558bf9dd408ddb9a893973ee6253a8f9996d7dde47c"
	I1020 12:41:24.864414  250275 cri.go:89] found id: "bc8f02baa8770ba6721a99030f25088261d2c0cd3db222046296ba97c0e0d54e"
	I1020 12:41:24.864417  250275 cri.go:89] found id: "e1cc7b6a003edbbb90ecfe2f4ca699c5caa7bc9e2e4aab94b226caa3576d4308"
	I1020 12:41:24.864419  250275 cri.go:89] found id: "f6c082ba3c5bb39c9c14011daf9f0b91a04643d84063cc518b4449099b0fd75e"
	I1020 12:41:24.864434  250275 cri.go:89] found id: "11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe"
	I1020 12:41:24.864437  250275 cri.go:89] found id: "332105a576843e1a18f1ce5bc18761e73f98bfb38484b691cc02ba884d3d6026"
	I1020 12:41:24.864450  250275 cri.go:89] found id: ""
	I1020 12:41:24.864489  250275 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:41:24.877553  250275 retry.go:31] will retry after 341.319085ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:41:24Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:41:25.219967  250275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:41:25.233558  250275 pause.go:52] kubelet running: false
	I1020 12:41:25.233619  250275 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:41:25.376879  250275 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:41:25.376964  250275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:41:25.444762  250275 cri.go:89] found id: "bbb58682200169c5790db34c4f863aebe576c9d5fa7ac27ea2f0a8bbeaa434e8"
	I1020 12:41:25.444801  250275 cri.go:89] found id: "a9c9157678ee8818b6613789a87ebd56bf24f6bce34399e3307522241d499bf8"
	I1020 12:41:25.444808  250275 cri.go:89] found id: "619011c2bcd4b73d4a9961d9b414cc1b7bf872d29079384e5981cc7d29203fa9"
	I1020 12:41:25.444812  250275 cri.go:89] found id: "e1aadd87abcbc99c03699210c5ae4f8e8e1782905fba250d326b688cbbd48f15"
	I1020 12:41:25.444817  250275 cri.go:89] found id: "81f8635595c355db5ae5a00afb41d8dd5cb7bff59c4bdad7af60c092966dab72"
	I1020 12:41:25.444821  250275 cri.go:89] found id: "5e481e30b8ec40735fa2f558bf9dd408ddb9a893973ee6253a8f9996d7dde47c"
	I1020 12:41:25.444824  250275 cri.go:89] found id: "bc8f02baa8770ba6721a99030f25088261d2c0cd3db222046296ba97c0e0d54e"
	I1020 12:41:25.444827  250275 cri.go:89] found id: "e1cc7b6a003edbbb90ecfe2f4ca699c5caa7bc9e2e4aab94b226caa3576d4308"
	I1020 12:41:25.444831  250275 cri.go:89] found id: "f6c082ba3c5bb39c9c14011daf9f0b91a04643d84063cc518b4449099b0fd75e"
	I1020 12:41:25.444839  250275 cri.go:89] found id: "11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe"
	I1020 12:41:25.444844  250275 cri.go:89] found id: "332105a576843e1a18f1ce5bc18761e73f98bfb38484b691cc02ba884d3d6026"
	I1020 12:41:25.444848  250275 cri.go:89] found id: ""
	I1020 12:41:25.444891  250275 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:41:25.457016  250275 retry.go:31] will retry after 424.265587ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:41:25Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:41:25.881495  250275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:41:25.895590  250275 pause.go:52] kubelet running: false
	I1020 12:41:25.895647  250275 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:41:26.052190  250275 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:41:26.052304  250275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:41:26.120032  250275 cri.go:89] found id: "bbb58682200169c5790db34c4f863aebe576c9d5fa7ac27ea2f0a8bbeaa434e8"
	I1020 12:41:26.120059  250275 cri.go:89] found id: "a9c9157678ee8818b6613789a87ebd56bf24f6bce34399e3307522241d499bf8"
	I1020 12:41:26.120066  250275 cri.go:89] found id: "619011c2bcd4b73d4a9961d9b414cc1b7bf872d29079384e5981cc7d29203fa9"
	I1020 12:41:26.120072  250275 cri.go:89] found id: "e1aadd87abcbc99c03699210c5ae4f8e8e1782905fba250d326b688cbbd48f15"
	I1020 12:41:26.120077  250275 cri.go:89] found id: "81f8635595c355db5ae5a00afb41d8dd5cb7bff59c4bdad7af60c092966dab72"
	I1020 12:41:26.120082  250275 cri.go:89] found id: "5e481e30b8ec40735fa2f558bf9dd408ddb9a893973ee6253a8f9996d7dde47c"
	I1020 12:41:26.120086  250275 cri.go:89] found id: "bc8f02baa8770ba6721a99030f25088261d2c0cd3db222046296ba97c0e0d54e"
	I1020 12:41:26.120090  250275 cri.go:89] found id: "e1cc7b6a003edbbb90ecfe2f4ca699c5caa7bc9e2e4aab94b226caa3576d4308"
	I1020 12:41:26.120094  250275 cri.go:89] found id: "f6c082ba3c5bb39c9c14011daf9f0b91a04643d84063cc518b4449099b0fd75e"
	I1020 12:41:26.120113  250275 cri.go:89] found id: "11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe"
	I1020 12:41:26.120117  250275 cri.go:89] found id: "332105a576843e1a18f1ce5bc18761e73f98bfb38484b691cc02ba884d3d6026"
	I1020 12:41:26.120121  250275 cri.go:89] found id: ""
	I1020 12:41:26.120163  250275 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:41:26.135309  250275 out.go:203] 
	W1020 12:41:26.136622  250275 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:41:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:41:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:41:26.136653  250275 out.go:285] * 
	* 
	W1020 12:41:26.140661  250275 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:41:26.142370  250275 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-384253 --alsologtostderr -v=1 failed: exit status 80
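The retry.go lines above show the shape of this failure: every "sudo runc list -f json" attempt fails with "open /run/runc: no such file or directory", minikube retries after a growing, jittered delay (251ms, 341ms, 424ms), and pause finally gives up with GUEST_PAUSE. A minimal sketch of that retry-with-backoff pattern is below; the function and names are illustrative, not minikube's actual implementation.

	// Sketch: retry a failing operation with jittered, growing delays
	// until it succeeds or an overall deadline expires, mirroring the
	// "will retry after ..." lines in the log above.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retryUntil(timeout time.Duration, fn func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("giving up: %w", err)
			}
			// Jitter so concurrent retriers do not synchronize.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay += delay / 3 // grow the base delay gradually
		}
	}

	func main() {
		attempts := 0
		_ = retryUntil(2*time.Second, func() error {
			attempts++
			return fmt.Errorf("runc list attempt %d failed", attempts)
		})
	}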
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-384253
helpers_test.go:243: (dbg) docker inspect old-k8s-version-384253:

-- stdout --
	[
	    {
	        "Id": "42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3",
	        "Created": "2025-10-20T12:39:15.199417657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243248,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:40:27.72062841Z",
	            "FinishedAt": "2025-10-20T12:40:26.877008283Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3/hosts",
	        "LogPath": "/var/lib/docker/containers/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3-json.log",
	        "Name": "/old-k8s-version-384253",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-384253:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-384253",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3",
	                "LowerDir": "/var/lib/docker/overlay2/90952055fc44c355f2c0ef9297152faad84fdfa9d5aaa20b7f5efe37b11adc6c-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/90952055fc44c355f2c0ef9297152faad84fdfa9d5aaa20b7f5efe37b11adc6c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/90952055fc44c355f2c0ef9297152faad84fdfa9d5aaa20b7f5efe37b11adc6c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/90952055fc44c355f2c0ef9297152faad84fdfa9d5aaa20b7f5efe37b11adc6c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-384253",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-384253/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-384253",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-384253",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-384253",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e9976ab8b92ca56f3b6c8d967444c1100008a657aa66c0505e319f362d51cd2",
	            "SandboxKey": "/var/run/docker/netns/1e9976ab8b92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-384253": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:74:66:9d:d2:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "297cbf1591dbfc42eff4519f7180072339a2b6c16821ef2400eadb774f669261",
	                    "EndpointID": "a35726fce365fe0d15ada0107d4feefb2c8606c3d3165ef6642f7f27bcf2857a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-384253",
	                        "42a1b3150f06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
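The NetworkSettings.Ports map in the inspect output above is what the pause log earlier queried via "docker container inspect -f" (cli_runner.go) to locate the SSH endpoint. A small sketch of that lookup, reusing the exact template string from the log; the wrapper function name is illustrative.

	// Sketch: recover the 127.0.0.1 host port that Docker bound to the
	// container's 22/tcp, using the same inspect template seen in the
	// pause log. Expects the docker CLI on PATH.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("old-k8s-version-384253")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 33063 in this report
	}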
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384253 -n old-k8s-version-384253
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384253 -n old-k8s-version-384253: exit status 2 (315.924693ms)

-- stdout --
	Running
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-384253 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-384253 logs -n 25: (1.116810049s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p pause-918853 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ pause   │ -p pause-918853 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ delete  │ -p pause-918853                                                                                                                                                                                                                               │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ start   │ -p cert-options-418869 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p missing-upgrade-123936                                                                                                                                                                                                                     │ missing-upgrade-123936    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ force-systemd-flag-670413 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-670413 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p force-systemd-flag-670413                                                                                                                                                                                                                  │ force-systemd-flag-670413 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ cert-options-418869 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ -p cert-options-418869 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p cert-options-418869                                                                                                                                                                                                                        │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:40 UTC │
	│ stop    │ -p kubernetes-upgrade-196539                                                                                                                                                                                                                  │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-384253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p old-k8s-version-384253 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-384253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:41 UTC │
	│ addons  │ enable metrics-server -p no-preload-649841 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p no-preload-649841 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p no-preload-649841 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ image   │ old-k8s-version-384253 image list --format=json                                                                                                                                                                                               │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p old-k8s-version-384253 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:40:49
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:40:49.636056  246403 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:40:49.636325  246403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:40:49.636354  246403 out.go:374] Setting ErrFile to fd 2...
	I1020 12:40:49.636360  246403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:40:49.636535  246403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:40:49.637053  246403 out.go:368] Setting JSON to false
	I1020 12:40:49.638246  246403 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4999,"bootTime":1760959051,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:40:49.638344  246403 start.go:141] virtualization: kvm guest
	I1020 12:40:49.640434  246403 out.go:179] * [no-preload-649841] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:40:49.642414  246403 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:40:49.642418  246403 notify.go:220] Checking for updates...
	I1020 12:40:49.645427  246403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:40:49.647167  246403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:40:49.648592  246403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:40:49.649884  246403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:40:49.651129  246403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:40:49.653960  246403 config.go:182] Loaded profile config "no-preload-649841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:40:49.654462  246403 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:40:49.680382  246403 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:40:49.680467  246403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:40:49.752170  246403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-20 12:40:49.737545471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:40:49.752328  246403 docker.go:318] overlay module found
	I1020 12:40:49.754839  246403 out.go:179] * Using the docker driver based on existing profile
	I1020 12:40:49.755964  246403 start.go:305] selected driver: docker
	I1020 12:40:49.755981  246403 start.go:925] validating driver "docker" against &{Name:no-preload-649841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:40:49.756091  246403 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:40:49.756815  246403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:40:49.850275  246403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-20 12:40:49.835747759 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:40:49.850704  246403 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:40:49.850752  246403 cni.go:84] Creating CNI manager for ""
	I1020 12:40:49.850829  246403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:40:49.850881  246403 start.go:349] cluster config:
	{Name:no-preload-649841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
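The wall of text above is minikube's entire ClusterConfig echoed into the log at start.go:349. A minimal sketch of how such a profile might round-trip through the profiles/<name>/config.json file saved a few lines later; the struct here names only a handful of the fields visible in the dump and is illustrative, not minikube's actual type:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// KubernetesConfig and ClusterConfig mirror a few of the fields visible in
// the dumped config above; the real minikube structs carry many more.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig KubernetesConfig
}

func main() {
	cc := ClusterConfig{
		Name: "no-preload-649841", Driver: "docker", Memory: 3072, CPUs: 2,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.34.1",
			ClusterName:       "no-preload-649841",
			ContainerRuntime:  "crio",
		},
	}
	// "Saving config to .../profiles/no-preload-649841/config.json"
	path := filepath.Join(os.TempDir(), "config.json")
	data, _ := json.MarshalIndent(cc, "", "    ")
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
	var back ClusterConfig
	data, _ = os.ReadFile(path)
	_ = json.Unmarshal(data, &back)
	fmt.Println("round-tripped profile for", back.Name)
}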
	I1020 12:40:49.854333  246403 out.go:179] * Starting "no-preload-649841" primary control-plane node in "no-preload-649841" cluster
	I1020 12:40:49.855499  246403 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:40:49.856923  246403 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:40:49.858535  246403 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:40:49.858660  246403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:40:49.858688  246403 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/config.json ...
	I1020 12:40:49.858906  246403 cache.go:107] acquiring lock: {Name:mkaa1533143dfeaa0b848a90ee060b0f610ddc81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.858938  246403 cache.go:107] acquiring lock: {Name:mkb88d6f234305026db9fdbd31e5610d523894ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.858961  246403 cache.go:107] acquiring lock: {Name:mkbd2ddf92e86ddfea2601bdc03463773cf73f0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.858986  246403 cache.go:107] acquiring lock: {Name:mkf9ce8a4aa3144f3c913fcb0e60bd670d0bc742 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.859011  246403 cache.go:107] acquiring lock: {Name:mk5e32941f5c3db81b90197cda93fec283cdb548 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.859050  246403 cache.go:107] acquiring lock: {Name:mkd5c8fee23b3a6854f408e7a08b0b2884a76ec5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.859079  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1020 12:40:49.859085  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1020 12:40:49.859097  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1020 12:40:49.859092  246403 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 181.625µs
	I1020 12:40:49.859105  246403 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 122.665µs
	I1020 12:40:49.859111  246403 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 62.689µs
	I1020 12:40:49.859118  246403 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1020 12:40:49.859120  246403 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1020 12:40:49.859040  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1020 12:40:49.859135  246403 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 250.791µs
	I1020 12:40:49.859143  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1020 12:40:49.859156  246403 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 240.76µs
	I1020 12:40:49.859165  246403 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1020 12:40:49.859095  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1020 12:40:49.859175  246403 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 168.148µs
	I1020 12:40:49.859121  246403 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1020 12:40:49.859145  246403 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1020 12:40:49.859182  246403 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1020 12:40:49.859028  246403 cache.go:107] acquiring lock: {Name:mk1adfad6c98d9549a5f634b54404b0984d1f237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.859230  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1020 12:40:49.859237  246403 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 213.811µs
	I1020 12:40:49.859246  246403 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1020 12:40:49.859022  246403 cache.go:107] acquiring lock: {Name:mk1c3d7b49aa5031d19fe1d56ce36f186e653c93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.859274  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1020 12:40:49.859282  246403 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 269.899µs
	I1020 12:40:49.859297  246403 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1020 12:40:49.859321  246403 cache.go:87] Successfully saved all images to host disk.
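Each acquiring-lock / exists / took-NNNµs triple above is one image being probed in the on-disk cache; every probe here hits, so the whole pass finishes in microseconds. A rough sketch of that hit path, assuming the tarball layout .minikube/cache/images/<arch>/<image>_<tag> seen in the paths above (the helper names are invented, not minikube's API):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
)

// one lock per destination path, like cache.go's named locks above
var cacheLocks sync.Map

// cachedTarPath maps "registry.k8s.io/pause:3.10.1" to
// ".../cache/images/amd64/registry.k8s.io/pause_3.10.1".
func cachedTarPath(cacheDir, arch, image string) string {
	return filepath.Join(cacheDir, "images", arch, strings.Replace(image, ":", "_", 1))
}

// ensureCached returns true when the tarball already exists, which is the
// "exists ... succeeded" branch in the log; otherwise the caller would
// pull the image and save it to the tar file.
func ensureCached(path string) (bool, error) {
	mu, _ := cacheLocks.LoadOrStore(path, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()
	_, err := os.Stat(path)
	if err == nil {
		return true, nil
	}
	if os.IsNotExist(err) {
		return false, nil
	}
	return false, err
}

func main() {
	p := cachedTarPath("/home/jenkins/.minikube/cache", "amd64", "registry.k8s.io/pause:3.10.1")
	hit, _ := ensureCached(p)
	fmt.Println(p, "cached:", hit)
}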
	I1020 12:40:49.891468  246403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:40:49.891492  246403 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:40:49.891512  246403 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:40:49.891543  246403 start.go:360] acquireMachinesLock for no-preload-649841: {Name:mke74c98c770c485912453347459850ab361dd04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.891611  246403 start.go:364] duration metric: took 44.39µs to acquireMachinesLock for "no-preload-649841"
	I1020 12:40:49.891635  246403 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:40:49.891641  246403 fix.go:54] fixHost starting: 
	I1020 12:40:49.891944  246403 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:49.918616  246403 fix.go:112] recreateIfNeeded on no-preload-649841: state=Stopped err=<nil>
	W1020 12:40:49.918651  246403 fix.go:138] unexpected machine state, will restart: <nil>
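fixHost decides between reusing, restarting, and recreating the machine by asking Docker for the container's state; state=Stopped with a nil error is the benign case that only needs a restart. A sketch of that probe using the same inspect format string the cli_runner logs show (the exited-to-Stopped mapping is an assumption about fix.go's internals):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out exactly like the logged cli_runner call:
// docker container inspect <name> --format={{.State.Status}}
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("no-preload-649841")
	if err != nil {
		fmt.Println("inspect failed, would recreate:", err)
		return
	}
	switch state {
	case "running":
		fmt.Println("reuse the running container")
	case "exited": // presumably surfaced as state=Stopped in fix.go
		fmt.Println("docker start and re-provision")
	default:
		fmt.Println("unexpected machine state, will restart:", state)
	}
}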
	I1020 12:40:46.468001  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:46.468408  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:46.967751  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:46.968170  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:47.467835  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:47.468286  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:47.967757  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:47.968216  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:48.467850  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:48.468268  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:48.968023  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:48.968438  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:49.467857  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:49.468219  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:49.967785  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:49.968376  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:50.467855  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:50.468285  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:50.968625  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:50.969057  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
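The 236655 lines interleaved here belong to a different cluster's restart, polling the apiserver's /healthz every 500ms until it stops refusing connections. A minimal version of that poll loop; the insecure TLS client is tolerable only because the target is a local apiserver whose certificate is not yet in any trust store:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls https://<ip>:8443/healthz until it answers 200 or
// the deadline passes, mirroring api_server.go's check/stopped log pairs.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// "stopped: ... connection refused" -> sleep and retry
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}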
	W1020 12:40:48.784897  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	W1020 12:40:50.785257  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	I1020 12:40:49.921997  246403 out.go:252] * Restarting existing docker container for "no-preload-649841" ...
	I1020 12:40:49.922104  246403 cli_runner.go:164] Run: docker start no-preload-649841
	I1020 12:40:50.242092  246403 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:50.268044  246403 kic.go:430] container "no-preload-649841" state is running.
	I1020 12:40:50.268495  246403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-649841
	I1020 12:40:50.294690  246403 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/config.json ...
	I1020 12:40:50.294973  246403 machine.go:93] provisionDockerMachine start ...
	I1020 12:40:50.295081  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:50.319467  246403 main.go:141] libmachine: Using SSH client type: native
	I1020 12:40:50.319818  246403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1020 12:40:50.319835  246403 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:40:50.320442  246403 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47736->127.0.0.1:33068: read: connection reset by peer
	I1020 12:40:53.480861  246403 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-649841
	
	I1020 12:40:53.480890  246403 ubuntu.go:182] provisioning hostname "no-preload-649841"
	I1020 12:40:53.480951  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:53.504982  246403 main.go:141] libmachine: Using SSH client type: native
	I1020 12:40:53.505287  246403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1020 12:40:53.505304  246403 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-649841 && echo "no-preload-649841" | sudo tee /etc/hostname
	I1020 12:40:53.675481  246403 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-649841
	
	I1020 12:40:53.675587  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:53.700306  246403 main.go:141] libmachine: Using SSH client type: native
	I1020 12:40:53.700594  246403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1020 12:40:53.700621  246403 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-649841' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-649841/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-649841' | sudo tee -a /etc/hosts; 
				fi
			fi
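The provisioner first sets the hostname over SSH, then runs the script above so /etc/hosts maps 127.0.1.1 to the new name, replacing an existing entry rather than appending a duplicate. A small sketch that builds the same script for an arbitrary hostname (in minikube the resulting string is executed over SSH):

package main

import "fmt"

// hostsPatch reproduces the provisioning script above: replace an existing
// 127.0.1.1 entry, or append one, so the node resolves its own hostname.
func hostsPatch(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	else
		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsPatch("no-preload-649841"))
}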
	I1020 12:40:53.858701  246403 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:40:53.858735  246403 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:40:53.858762  246403 ubuntu.go:190] setting up certificates
	I1020 12:40:53.858788  246403 provision.go:84] configureAuth start
	I1020 12:40:53.858859  246403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-649841
	I1020 12:40:53.882618  246403 provision.go:143] copyHostCerts
	I1020 12:40:53.882687  246403 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:40:53.882707  246403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:40:53.882825  246403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:40:53.882971  246403 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:40:53.882983  246403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:40:53.883048  246403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:40:53.883151  246403 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:40:53.883164  246403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:40:53.883201  246403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:40:53.883312  246403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.no-preload-649841 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-649841]
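configureAuth regenerates the Docker-machine server certificate with every name and IP the node answers to baked in as SANs (the san=[...] list above). A self-contained sketch of that step with crypto/x509; key sizes, lifetimes, and subject fields are placeholders, not minikube's exact choices:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert mimics provision.go's "generating server cert ... san=[...]":
// a CA-signed cert whose SANs cover every name/IP the machine is reached by.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-649841"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-649841"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// throwaway self-signed CA, standing in for .minikube/certs/ca.pem
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, _, err := newServerCert(ca, caKey)
	fmt.Println("server cert bytes:", len(der), "err:", err)
}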
	I1020 12:40:54.281887  246403 provision.go:177] copyRemoteCerts
	I1020 12:40:54.281955  246403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:40:54.281999  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:54.309089  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:54.421670  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:40:54.442413  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 12:40:54.515213  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:40:54.534323  246403 provision.go:87] duration metric: took 675.51789ms to configureAuth
	I1020 12:40:54.534349  246403 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:40:54.534533  246403 config.go:182] Loaded profile config "no-preload-649841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:40:54.534654  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:54.555834  246403 main.go:141] libmachine: Using SSH client type: native
	I1020 12:40:54.556087  246403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1020 12:40:54.556103  246403 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:40:51.468531  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:40:51.468604  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:40:51.496766  236655 cri.go:89] found id: "2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	I1020 12:40:51.496823  236655 cri.go:89] found id: ""
	I1020 12:40:51.496840  236655 logs.go:282] 1 containers: [2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97]
	I1020 12:40:51.496895  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:40:51.501349  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:40:51.501418  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:40:51.530546  236655 cri.go:89] found id: ""
	I1020 12:40:51.530577  236655 logs.go:282] 0 containers: []
	W1020 12:40:51.530589  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:40:51.530596  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:40:51.530665  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:40:51.560094  236655 cri.go:89] found id: ""
	I1020 12:40:51.560129  236655 logs.go:282] 0 containers: []
	W1020 12:40:51.560137  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:40:51.560143  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:40:51.560192  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:40:51.591153  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:40:51.591179  236655 cri.go:89] found id: ""
	I1020 12:40:51.591188  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:40:51.591252  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:40:51.595779  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:40:51.595843  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:40:51.626368  236655 cri.go:89] found id: ""
	I1020 12:40:51.626399  236655 logs.go:282] 0 containers: []
	W1020 12:40:51.626410  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:40:51.626417  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:40:51.626475  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:40:51.656209  236655 cri.go:89] found id: "c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:40:51.656234  236655 cri.go:89] found id: ""
	I1020 12:40:51.656242  236655 logs.go:282] 1 containers: [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc]
	I1020 12:40:51.656314  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:40:51.661042  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:40:51.661113  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:40:51.691358  236655 cri.go:89] found id: ""
	I1020 12:40:51.691382  236655 logs.go:282] 0 containers: []
	W1020 12:40:51.691392  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:40:51.691398  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:40:51.691454  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:40:51.719865  236655 cri.go:89] found id: ""
	I1020 12:40:51.719894  236655 logs.go:282] 0 containers: []
	W1020 12:40:51.719904  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:40:51.719915  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:40:51.719927  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:40:51.752943  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:40:51.752973  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:40:51.827739  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:40:51.827783  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:40:51.844672  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:40:51.844703  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:40:51.905039  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:40:51.905057  236655 logs.go:123] Gathering logs for kube-apiserver [2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97] ...
	I1020 12:40:51.905082  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	I1020 12:40:51.938119  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:40:51.938149  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:40:51.981553  236655 logs.go:123] Gathering logs for kube-controller-manager [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc] ...
	I1020 12:40:51.981585  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:40:52.020006  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:40:52.020042  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
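Because /healthz keeps getting refused, logs.go sweeps a fixed set of sources: kubelet and CRI-O from journalctl, per-container logs from crictl, dmesg, and kubectl describe nodes; a failure in any one source (like the refused localhost:8443 above) is recorded instead of aborting the sweep. A compact version of that best-effort loop:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs each named command and keeps going on failure, the way
// logs.go:123 keeps "Gathering logs for ..." even after describe-nodes fails.
func gather() map[string]string {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo crictl ps -a",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	}
	out := make(map[string]string)
	for name, cmd := range sources {
		b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// keep the partial output plus the error; don't abort the sweep
			out[name] = fmt.Sprintf("%s\n(error: %v)", b, err)
			continue
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, logs := range gather() {
		fmt.Printf("== %s ==\n%.120s\n", name, logs)
	}
}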
	I1020 12:40:54.580876  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:54.581250  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:54.581311  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:40:54.581382  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:40:54.615560  236655 cri.go:89] found id: "2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	I1020 12:40:54.615584  236655 cri.go:89] found id: ""
	I1020 12:40:54.615592  236655 logs.go:282] 1 containers: [2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97]
	I1020 12:40:54.615649  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:40:54.620340  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:40:54.620415  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:40:54.648549  236655 cri.go:89] found id: ""
	I1020 12:40:54.648577  236655 logs.go:282] 0 containers: []
	W1020 12:40:54.648587  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:40:54.648594  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:40:54.648651  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:40:54.678127  236655 cri.go:89] found id: ""
	I1020 12:40:54.678153  236655 logs.go:282] 0 containers: []
	W1020 12:40:54.678160  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:40:54.678165  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:40:54.678215  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:40:54.708845  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:40:54.708870  236655 cri.go:89] found id: ""
	I1020 12:40:54.708881  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:40:54.708937  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:40:54.713757  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:40:54.713857  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:40:54.743864  236655 cri.go:89] found id: ""
	I1020 12:40:54.743892  236655 logs.go:282] 0 containers: []
	W1020 12:40:54.743903  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:40:54.743909  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:40:54.743984  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:40:54.775127  236655 cri.go:89] found id: "c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:40:54.775155  236655 cri.go:89] found id: ""
	I1020 12:40:54.775165  236655 logs.go:282] 1 containers: [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc]
	I1020 12:40:54.775223  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:40:54.779594  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:40:54.779656  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:40:54.810620  236655 cri.go:89] found id: ""
	I1020 12:40:54.810650  236655 logs.go:282] 0 containers: []
	W1020 12:40:54.810659  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:40:54.810666  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:40:54.810750  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:40:54.846027  236655 cri.go:89] found id: ""
	I1020 12:40:54.846054  236655 logs.go:282] 0 containers: []
	W1020 12:40:54.846064  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:40:54.846074  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:40:54.846087  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:40:54.891082  236655 logs.go:123] Gathering logs for kube-controller-manager [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc] ...
	I1020 12:40:54.891117  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:40:54.921076  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:40:54.921120  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:40:54.963381  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:40:54.963425  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:40:54.997225  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:40:54.997262  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:40:55.075836  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:40:55.075874  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:40:55.092557  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:40:55.092591  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1020 12:40:55.301182  246403 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:40:55.301217  246403 machine.go:96] duration metric: took 5.006223618s to provisionDockerMachine
	I1020 12:40:55.301232  246403 start.go:293] postStartSetup for "no-preload-649841" (driver="docker")
	I1020 12:40:55.301246  246403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:40:55.301319  246403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:40:55.301378  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:55.322672  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:55.423433  246403 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:40:55.427173  246403 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:40:55.427209  246403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:40:55.427222  246403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:40:55.427273  246403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:40:55.427353  246403 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:40:55.427442  246403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:40:55.434970  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:40:55.452187  246403 start.go:296] duration metric: took 150.937095ms for postStartSetup
	I1020 12:40:55.452280  246403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:40:55.452324  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:55.470663  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:55.569126  246403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:40:55.573826  246403 fix.go:56] duration metric: took 5.682176942s for fixHost
	I1020 12:40:55.573853  246403 start.go:83] releasing machines lock for "no-preload-649841", held for 5.68222778s
	I1020 12:40:55.573919  246403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-649841
	I1020 12:40:55.595663  246403 ssh_runner.go:195] Run: cat /version.json
	I1020 12:40:55.595708  246403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:40:55.595722  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:55.595761  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:55.614470  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:55.615487  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:55.766439  246403 ssh_runner.go:195] Run: systemctl --version
	I1020 12:40:55.773503  246403 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:40:55.810619  246403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:40:55.815713  246403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:40:55.815817  246403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:40:55.824671  246403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 12:40:55.824695  246403 start.go:495] detecting cgroup driver to use...
	I1020 12:40:55.824735  246403 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:40:55.824799  246403 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:40:55.840385  246403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:40:55.853670  246403 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:40:55.853741  246403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:40:55.868375  246403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:40:55.881459  246403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:40:55.965285  246403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:40:56.044253  246403 docker.go:234] disabling docker service ...
	I1020 12:40:56.044328  246403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:40:56.058879  246403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:40:56.071356  246403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:40:56.153839  246403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:40:56.237106  246403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:40:56.249881  246403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:40:56.265010  246403 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:40:56.265073  246403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.274147  246403 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:40:56.274215  246403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.283689  246403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.292859  246403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.301869  246403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:40:56.310458  246403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.320173  246403 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.329513  246403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.338702  246403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:40:56.346367  246403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:40:56.354084  246403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:40:56.433266  246403 ssh_runner.go:195] Run: sudo systemctl restart crio
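The sed chain that just ran rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the systemd cgroup manager, and re-insert conmon_cgroup after deleting any stale copy, so the edits stay idempotent across restarts. A sketch that assembles the same command list (printed rather than executed here; minikube pipes each one through its ssh_runner):

package main

import "fmt"

// crioEdits replays the sed chain above; each edit overwrites or deletes
// before inserting, so rerunning it never stacks duplicate lines.
func crioEdits(pauseImage, cgroupMgr string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf),
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioEdits("registry.k8s.io/pause:3.10.1", "systemd") {
		fmt.Println("would run:", c)
	}
}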
	I1020 12:40:56.543617  246403 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:40:56.543682  246403 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:40:56.547784  246403 start.go:563] Will wait 60s for crictl version
	I1020 12:40:56.547843  246403 ssh_runner.go:195] Run: which crictl
	I1020 12:40:56.551562  246403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:40:56.576670  246403 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:40:56.576763  246403 ssh_runner.go:195] Run: crio --version
	I1020 12:40:56.605060  246403 ssh_runner.go:195] Run: crio --version
	I1020 12:40:56.636370  246403 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1020 12:40:53.285248  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	W1020 12:40:55.285807  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	I1020 12:40:56.637696  246403 cli_runner.go:164] Run: docker network inspect no-preload-649841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:40:56.656858  246403 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 12:40:56.661099  246403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:40:56.671901  246403 kubeadm.go:883] updating cluster {Name:no-preload-649841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:40:56.672010  246403 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:40:56.672041  246403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:40:56.705922  246403 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:40:56.705943  246403 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:40:56.705950  246403 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 12:40:56.706072  246403 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-649841 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 12:40:56.706168  246403 ssh_runner.go:195] Run: crio config
	I1020 12:40:56.753348  246403 cni.go:84] Creating CNI manager for ""
	I1020 12:40:56.753368  246403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:40:56.753382  246403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:40:56.753406  246403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-649841 NodeName:no-preload-649841 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:40:56.753543  246403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-649841"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
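Everything from "kubeadm config:" down to here is rendered from the options struct logged at kubeadm.go:190 and then shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2213-byte scp below). A trimmed sketch of that rendering with text/template, wiring up only a few of the fields:

package main

import (
	"os"
	"text/template"
)

// A cut-down version of the kubeadm config generation; only a couple of
// fields from the options struct are wired into the template here.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlane}}:{{.Port}}
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type opts struct {
	ControlPlane, K8sVersion, PodSubnet, ServiceCIDR string
	Port                                             int
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, opts{
		ControlPlane: "control-plane.minikube.internal",
		Port:         8443,
		K8sVersion:   "v1.34.1",
		PodSubnet:    "10.244.0.0/16",
		ServiceCIDR:  "10.96.0.0/12",
	})
}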
	I1020 12:40:56.753612  246403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:40:56.762410  246403 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:40:56.762478  246403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:40:56.770453  246403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1020 12:40:56.784132  246403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:40:56.797279  246403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1020 12:40:56.810339  246403 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:40:56.814217  246403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:40:56.825235  246403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:40:56.906648  246403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:40:56.931385  246403 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841 for IP: 192.168.85.2
	I1020 12:40:56.931409  246403 certs.go:195] generating shared ca certs ...
	I1020 12:40:56.931432  246403 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:40:56.931589  246403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:40:56.931646  246403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:40:56.931658  246403 certs.go:257] generating profile certs ...
	I1020 12:40:56.931755  246403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.key
	I1020 12:40:56.931852  246403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.key.f7062585
	I1020 12:40:56.931911  246403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.key
	I1020 12:40:56.932107  246403 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:40:56.932151  246403 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:40:56.932163  246403 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:40:56.932197  246403 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:40:56.932228  246403 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:40:56.932258  246403 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:40:56.932317  246403 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:40:56.933038  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:40:56.953292  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:40:56.973650  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:40:56.993266  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:40:57.017517  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1020 12:40:57.036397  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:40:57.054108  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:40:57.072133  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:40:57.090479  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:40:57.108635  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:40:57.127561  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:40:57.145529  246403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:40:57.158416  246403 ssh_runner.go:195] Run: openssl version
	I1020 12:40:57.164759  246403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:40:57.173699  246403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:40:57.177364  246403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:40:57.177419  246403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:40:57.212538  246403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:40:57.221201  246403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:40:57.230062  246403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:40:57.234010  246403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:40:57.234077  246403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:40:57.269185  246403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:40:57.277502  246403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:40:57.287166  246403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:40:57.291055  246403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:40:57.291115  246403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:40:57.326998  246403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
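
The three test-and-link passes above follow the standard OpenSSL trust-store convention: certificates under /etc/ssl/certs are resolved through a symlink named after the subject hash. A minimal sketch of that wiring, reusing the minikubeCA.pem path from this log (the hash matches the b5213941.0 link created above):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # hash-named link OpenSSL uses for lookup
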
	I1020 12:40:57.335446  246403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:40:57.339569  246403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 12:40:57.376124  246403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 12:40:57.413731  246403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 12:40:57.456807  246403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 12:40:57.501840  246403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 12:40:57.550022  246403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
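
Each of the six probes above relies on openssl's -checkend flag: it exits non-zero when the certificate expires within the given number of seconds, so 86400 asks "still valid for 24 hours?". A standalone sketch:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "apiserver cert valid for at least 24h"
    else
        echo "apiserver cert expires within 24h; needs regeneration"
    fi
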
	I1020 12:40:57.605984  246403 kubeadm.go:400] StartCluster: {Name:no-preload-649841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
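
This StartCluster struct is the persisted profile configuration; assuming the standard profile layout (and that jq is installed), it can be inspected outside a run with:

    jq . /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/config.json
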
	I1020 12:40:57.606106  246403 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:40:57.606162  246403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:40:57.641826  246403 cri.go:89] found id: "816d9c037942c04231fca4c103de9e2bf20fdf60fa1761988b5c578a09691679"
	I1020 12:40:57.641847  246403 cri.go:89] found id: "49212f5520e23aa6f4699b58e138ce3c6899c074fd04839a3812363c6bf726d0"
	I1020 12:40:57.641854  246403 cri.go:89] found id: "bf13bdfc60d3a55c47badd4fa2e0a4042348a310ddce98adaa907a594a64d40d"
	I1020 12:40:57.641858  246403 cri.go:89] found id: "28717124ea3c362de3161e549a9412d0e0beda3ede0b813f19be2debafac8bd1"
	I1020 12:40:57.641862  246403 cri.go:89] found id: ""
	I1020 12:40:57.641907  246403 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 12:40:57.655461  246403 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:40:57Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:40:57.655549  246403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:40:57.664124  246403 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 12:40:57.664145  246403 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 12:40:57.664189  246403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 12:40:57.672106  246403 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:40:57.673025  246403 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-649841" does not appear in /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:40:57.673599  246403 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-11075/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-649841" cluster setting kubeconfig missing "no-preload-649841" context setting]
	I1020 12:40:57.674474  246403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
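
The repair performed here can be reproduced with stock kubectl; the flags below are illustrative, using the endpoint this cluster reports later in the log:

    kubectl config set-cluster no-preload-649841 \
        --server=https://192.168.85.2:8443 \
        --certificate-authority=/home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt
    kubectl config set-context no-preload-649841 \
        --cluster=no-preload-649841 --user=no-preload-649841
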
	I1020 12:40:57.676494  246403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 12:40:57.686220  246403 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1020 12:40:57.686261  246403 kubeadm.go:601] duration metric: took 22.109507ms to restartPrimaryControlPlane
	I1020 12:40:57.686293  246403 kubeadm.go:402] duration metric: took 80.296499ms to StartCluster
	I1020 12:40:57.686315  246403 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:40:57.686402  246403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:40:57.688167  246403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:40:57.688425  246403 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:40:57.688495  246403 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:40:57.688585  246403 addons.go:69] Setting storage-provisioner=true in profile "no-preload-649841"
	I1020 12:40:57.688604  246403 addons.go:238] Setting addon storage-provisioner=true in "no-preload-649841"
	W1020 12:40:57.688615  246403 addons.go:247] addon storage-provisioner should already be in state true
	I1020 12:40:57.688644  246403 host.go:66] Checking if "no-preload-649841" exists ...
	I1020 12:40:57.688650  246403 config.go:182] Loaded profile config "no-preload-649841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:40:57.688687  246403 addons.go:69] Setting dashboard=true in profile "no-preload-649841"
	I1020 12:40:57.688707  246403 addons.go:238] Setting addon dashboard=true in "no-preload-649841"
	W1020 12:40:57.688718  246403 addons.go:247] addon dashboard should already be in state true
	I1020 12:40:57.688740  246403 host.go:66] Checking if "no-preload-649841" exists ...
	I1020 12:40:57.688925  246403 addons.go:69] Setting default-storageclass=true in profile "no-preload-649841"
	I1020 12:40:57.688953  246403 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-649841"
	I1020 12:40:57.689157  246403 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:57.689245  246403 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:57.689253  246403 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:57.692498  246403 out.go:179] * Verifying Kubernetes components...
	I1020 12:40:57.693933  246403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:40:57.717912  246403 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:40:57.717921  246403 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 12:40:57.718123  246403 addons.go:238] Setting addon default-storageclass=true in "no-preload-649841"
	W1020 12:40:57.718144  246403 addons.go:247] addon default-storageclass should already be in state true
	I1020 12:40:57.718173  246403 host.go:66] Checking if "no-preload-649841" exists ...
	I1020 12:40:57.718753  246403 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:57.719290  246403 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:40:57.719305  246403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:40:57.719352  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:57.720357  246403 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1020 12:40:57.721382  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 12:40:57.721402  246403 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 12:40:57.721455  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:57.755026  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:57.756763  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:57.758118  246403 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:40:57.758139  246403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:40:57.758190  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:57.787000  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:57.846839  246403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:40:57.860459  246403 node_ready.go:35] waiting up to 6m0s for node "no-preload-649841" to be "Ready" ...
	I1020 12:40:57.874441  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 12:40:57.874483  246403 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 12:40:57.874812  246403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:40:57.888994  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 12:40:57.889033  246403 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 12:40:57.898531  246403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:40:57.907146  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 12:40:57.907178  246403 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 12:40:57.924685  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 12:40:57.924707  246403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 12:40:57.942656  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 12:40:57.942688  246403 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 12:40:57.958095  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 12:40:57.958124  246403 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 12:40:57.972266  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 12:40:57.972291  246403 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 12:40:57.985848  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 12:40:57.985875  246403 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 12:40:57.999144  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 12:40:57.999171  246403 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 12:40:58.012294  246403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
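
Once those ten manifests are applied, addon health can be verified with a plain rollout query; the deployment and namespace names below follow the stock dashboard v2.7.0 manifests and are illustrative here:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard \
        rollout status deployment/kubernetes-dashboard --timeout=2m
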
	I1020 12:40:59.461886  246403 node_ready.go:49] node "no-preload-649841" is "Ready"
	I1020 12:40:59.461927  246403 node_ready.go:38] duration metric: took 1.601435642s for node "no-preload-649841" to be "Ready" ...
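
The same readiness gate expressed with stock kubectl (illustrative; minikube polls the API directly instead):

    kubectl wait node/no-preload-649841 --for=condition=Ready --timeout=6m
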
	I1020 12:40:59.461947  246403 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:40:59.462006  246403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:40:59.965544  246403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.090697152s)
	I1020 12:40:59.965581  246403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.067020733s)
	I1020 12:40:59.965676  246403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.953345713s)
	I1020 12:40:59.965707  246403 api_server.go:72] duration metric: took 2.277255271s to wait for apiserver process to appear ...
	I1020 12:40:59.965725  246403 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:40:59.965745  246403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:40:59.968115  246403 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-649841 addons enable metrics-server
	
	I1020 12:40:59.970245  246403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:40:59.970269  246403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
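
The [-] lines mark post-start hooks that have not yet completed; /healthz keeps returning 500 until every hook reports ok. The retry loop above amounts to a simple poll (sketch only; -k stands in for mounting the cluster CA):

    until curl -fsk https://192.168.85.2:8443/healthz >/dev/null; do
        sleep 1    # -f keeps failing while any poststarthook is still pending
    done
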
	I1020 12:40:59.972316  246403 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1020 12:40:57.788184  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	W1020 12:41:00.284683  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	W1020 12:41:02.285085  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	I1020 12:40:59.973645  246403 addons.go:514] duration metric: took 2.28516047s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1020 12:41:00.466494  246403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:41:00.470936  246403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:41:00.470961  246403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:41:00.966299  246403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:41:00.970351  246403 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 12:41:00.971338  246403 api_server.go:141] control plane version: v1.34.1
	I1020 12:41:00.971363  246403 api_server.go:131] duration metric: took 1.005631068s to wait for apiserver health ...
	I1020 12:41:00.971372  246403 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:41:00.975054  246403 system_pods.go:59] 8 kube-system pods found
	I1020 12:41:00.975100  246403 system_pods.go:61] "coredns-66bc5c9577-7d88p" [6c859d9e-5016-485a-adc3-b33089248f2f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:41:00.975140  246403 system_pods.go:61] "etcd-no-preload-649841" [01effaac-dc30-4ede-9ffa-db5dd8516ba9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:41:00.975155  246403 system_pods.go:61] "kindnet-ghtcz" [c057504d-908d-4f7f-995b-0524392b82ff] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1020 12:41:00.975168  246403 system_pods.go:61] "kube-apiserver-no-preload-649841" [604873f7-a274-4c82-97ca-56b8366d80da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:41:00.975179  246403 system_pods.go:61] "kube-controller-manager-no-preload-649841" [45c19792-ae07-4c79-9844-27aa5b1f69e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:41:00.975190  246403 system_pods.go:61] "kube-proxy-6vpwz" [6ef821cc-1bf1-4ded-8a94-d320d898c160] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1020 12:41:00.975200  246403 system_pods.go:61] "kube-scheduler-no-preload-649841" [bae232f4-b119-46f1-b7d6-e207bb6229a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:41:00.975210  246403 system_pods.go:61] "storage-provisioner" [7ee83276-3c65-4f28-88df-db5aca9ab40b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:41:00.975220  246403 system_pods.go:74] duration metric: took 3.840898ms to wait for pod list to return data ...
	I1020 12:41:00.975233  246403 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:41:00.977661  246403 default_sa.go:45] found service account: "default"
	I1020 12:41:00.977680  246403 default_sa.go:55] duration metric: took 2.438516ms for default service account to be created ...
	I1020 12:41:00.977688  246403 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:41:00.980488  246403 system_pods.go:86] 8 kube-system pods found
	I1020 12:41:00.980511  246403 system_pods.go:89] "coredns-66bc5c9577-7d88p" [6c859d9e-5016-485a-adc3-b33089248f2f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:41:00.980519  246403 system_pods.go:89] "etcd-no-preload-649841" [01effaac-dc30-4ede-9ffa-db5dd8516ba9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:41:00.980526  246403 system_pods.go:89] "kindnet-ghtcz" [c057504d-908d-4f7f-995b-0524392b82ff] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1020 12:41:00.980532  246403 system_pods.go:89] "kube-apiserver-no-preload-649841" [604873f7-a274-4c82-97ca-56b8366d80da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:41:00.980538  246403 system_pods.go:89] "kube-controller-manager-no-preload-649841" [45c19792-ae07-4c79-9844-27aa5b1f69e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:41:00.980547  246403 system_pods.go:89] "kube-proxy-6vpwz" [6ef821cc-1bf1-4ded-8a94-d320d898c160] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1020 12:41:00.980553  246403 system_pods.go:89] "kube-scheduler-no-preload-649841" [bae232f4-b119-46f1-b7d6-e207bb6229a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:41:00.980560  246403 system_pods.go:89] "storage-provisioner" [7ee83276-3c65-4f28-88df-db5aca9ab40b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:41:00.980567  246403 system_pods.go:126] duration metric: took 2.874125ms to wait for k8s-apps to be running ...
	I1020 12:41:00.980577  246403 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:41:00.980618  246403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:41:00.994130  246403 system_svc.go:56] duration metric: took 13.542883ms WaitForService to wait for kubelet
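
is-active --quiet signals state purely through its exit code (0 only when the unit is active), which is why the check leaves no output in this log:

    sudo systemctl is-active --quiet kubelet && echo "kubelet running"
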
	I1020 12:41:00.994157  246403 kubeadm.go:586] duration metric: took 3.305707481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:41:00.994173  246403 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:41:00.997168  246403 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:41:00.997193  246403 node_conditions.go:123] node cpu capacity is 8
	I1020 12:41:00.997211  246403 node_conditions.go:105] duration metric: took 3.027114ms to run NodePressure ...
	I1020 12:41:00.997224  246403 start.go:241] waiting for startup goroutines ...
	I1020 12:41:00.997230  246403 start.go:246] waiting for cluster config update ...
	I1020 12:41:00.997240  246403 start.go:255] writing updated cluster config ...
	I1020 12:41:00.997508  246403 ssh_runner.go:195] Run: rm -f paused
	I1020 12:41:01.001542  246403 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:41:01.005111  246403 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7d88p" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 12:41:03.011436  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
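
An equivalent of this extra wait with stock kubectl, shown for one of the label selectors listed above (illustrative):

    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
        --for=condition=Ready --timeout=4m
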
	I1020 12:41:05.166152  236655 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.073533737s)
	W1020 12:41:05.166201  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1020 12:41:05.166211  236655 logs.go:123] Gathering logs for kube-apiserver [2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97] ...
	I1020 12:41:05.166234  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	W1020 12:41:04.286463  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	W1020 12:41:06.786408  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	W1020 12:41:05.511223  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	W1020 12:41:07.511472  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	I1020 12:41:07.708960  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	W1020 12:41:09.285547  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	I1020 12:41:10.786084  243047 pod_ready.go:94] pod "coredns-5dd5756b68-c9869" is "Ready"
	I1020 12:41:10.786116  243047 pod_ready.go:86] duration metric: took 32.506942493s for pod "coredns-5dd5756b68-c9869" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:10.788983  243047 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:10.792806  243047 pod_ready.go:94] pod "etcd-old-k8s-version-384253" is "Ready"
	I1020 12:41:10.792829  243047 pod_ready.go:86] duration metric: took 3.823204ms for pod "etcd-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:10.795404  243047 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:10.799298  243047 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-384253" is "Ready"
	I1020 12:41:10.799324  243047 pod_ready.go:86] duration metric: took 3.893647ms for pod "kube-apiserver-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:10.801763  243047 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:10.982339  243047 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-384253" is "Ready"
	I1020 12:41:10.982363  243047 pod_ready.go:86] duration metric: took 180.570941ms for pod "kube-controller-manager-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:11.183421  243047 pod_ready.go:83] waiting for pod "kube-proxy-qfvtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:11.582613  243047 pod_ready.go:94] pod "kube-proxy-qfvtm" is "Ready"
	I1020 12:41:11.582637  243047 pod_ready.go:86] duration metric: took 399.193005ms for pod "kube-proxy-qfvtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:11.783523  243047 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:12.182444  243047 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-384253" is "Ready"
	I1020 12:41:12.182475  243047 pod_ready.go:86] duration metric: took 398.922892ms for pod "kube-scheduler-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:12.182491  243047 pod_ready.go:40] duration metric: took 33.907059268s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:41:12.227008  243047 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1020 12:41:12.229723  243047 out.go:203] 
	W1020 12:41:12.231360  243047 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1020 12:41:12.232683  243047 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1020 12:41:12.234096  243047 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-384253" cluster and "default" namespace by default
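
kubectl's skew policy only guarantees compatibility within one minor version of the server, hence the warning at skew 6. The suggested wrapper fetches a matching client; the explicit profile flag below is an illustrative addition:

    minikube -p old-k8s-version-384253 kubectl -- get pods -A
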
	W1020 12:41:10.010491  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	W1020 12:41:12.010632  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	W1020 12:41:14.510590  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	I1020 12:41:12.709875  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1020 12:41:12.709943  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:12.710009  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:12.737186  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:12.737206  236655 cri.go:89] found id: "2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	I1020 12:41:12.737211  236655 cri.go:89] found id: ""
	I1020 12:41:12.737220  236655 logs.go:282] 2 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97]
	I1020 12:41:12.737278  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:12.741246  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:12.745179  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:12.745245  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:12.771137  236655 cri.go:89] found id: ""
	I1020 12:41:12.771159  236655 logs.go:282] 0 containers: []
	W1020 12:41:12.771167  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:12.771173  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:12.771224  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:12.799118  236655 cri.go:89] found id: ""
	I1020 12:41:12.799153  236655 logs.go:282] 0 containers: []
	W1020 12:41:12.799161  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:12.799167  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:12.799215  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:12.826247  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:12.826278  236655 cri.go:89] found id: ""
	I1020 12:41:12.826289  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:12.826341  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:12.830624  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:12.830686  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:12.858505  236655 cri.go:89] found id: ""
	I1020 12:41:12.858529  236655 logs.go:282] 0 containers: []
	W1020 12:41:12.858536  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:12.858542  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:12.858595  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:12.885726  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:12.885745  236655 cri.go:89] found id: "c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:41:12.885748  236655 cri.go:89] found id: ""
	I1020 12:41:12.885755  236655 logs.go:282] 2 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc]
	I1020 12:41:12.885818  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:12.889911  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:12.893711  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:12.893798  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:12.921035  236655 cri.go:89] found id: ""
	I1020 12:41:12.921069  236655 logs.go:282] 0 containers: []
	W1020 12:41:12.921079  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:12.921087  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:12.921143  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:12.948187  236655 cri.go:89] found id: ""
	I1020 12:41:12.948209  236655 logs.go:282] 0 containers: []
	W1020 12:41:12.948216  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:12.948234  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:12.948244  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
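
Decoded, that dmesg invocation is: -P no pager, -H human-readable timestamps, -L=never no color, --level restricted to warning severity and above, with tail keeping only the last 400 lines:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
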
	I1020 12:41:12.962707  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:12.962733  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:12.995611  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:12.995639  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:13.024078  236655 logs.go:123] Gathering logs for kube-controller-manager [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc] ...
	I1020 12:41:13.024123  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:41:13.050540  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:13.050566  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:13.120949  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:13.120984  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:17.010351  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	W1020 12:41:19.510306  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	I1020 12:41:16.608975  236655 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.487969789s)
	W1020 12:41:16.609008  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:58148->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:58148->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1020 12:41:16.609021  236655 logs.go:123] Gathering logs for kube-apiserver [2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97] ...
	I1020 12:41:16.609038  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	W1020 12:41:16.634187  236655 logs.go:130] failed kube-apiserver [2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97": Process exited with status 1
	stdout:
	
	stderr:
	E1020 12:41:16.632053    1613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97\": container with ID starting with 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97 not found: ID does not exist" containerID="2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	time="2025-10-20T12:41:16Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97\": container with ID starting with 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1020 12:41:16.632053    1613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97\": container with ID starting with 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97 not found: ID does not exist" containerID="2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	time="2025-10-20T12:41:16Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97\": container with ID starting with 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97 not found: ID does not exist"
	
	** /stderr **
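
The NotFound here is benign: the ID came from an earlier crictl ps snapshot, and the exited kube-apiserver container was apparently pruned before its logs could be read. A quick existence probe avoids the fatal exit (sketch, reusing the ID from above):

    ID=2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97
    sudo crictl inspect "$ID" >/dev/null 2>&1 \
        && sudo crictl logs --tail 400 "$ID" \
        || echo "container $ID already pruned"
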
	I1020 12:41:16.634211  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:16.634225  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:16.679625  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:16.679655  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:16.722809  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:16.722841  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
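
The backticked `which crictl || echo crictl` is a portability guard: when crictl is not on PATH, the bare name is substituted (and fails cleanly), letting the `|| sudo docker ps -a` branch take over on Docker-runtime nodes. The same guard in modern form:

    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a
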
	I1020 12:41:19.254460  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:19.254947  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:19.255013  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:19.255061  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:19.281805  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:19.281830  236655 cri.go:89] found id: ""
	I1020 12:41:19.281836  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:19.281887  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:19.285728  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:19.285818  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:19.313641  236655 cri.go:89] found id: ""
	I1020 12:41:19.313670  236655 logs.go:282] 0 containers: []
	W1020 12:41:19.313680  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:19.313687  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:19.313753  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:19.340841  236655 cri.go:89] found id: ""
	I1020 12:41:19.340868  236655 logs.go:282] 0 containers: []
	W1020 12:41:19.340878  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:19.340886  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:19.340950  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:19.370581  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:19.370605  236655 cri.go:89] found id: ""
	I1020 12:41:19.370615  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:19.370666  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:19.374613  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:19.374689  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:19.401702  236655 cri.go:89] found id: ""
	I1020 12:41:19.401727  236655 logs.go:282] 0 containers: []
	W1020 12:41:19.401735  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:19.401740  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:19.401817  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:19.430961  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:19.430984  236655 cri.go:89] found id: "c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:41:19.430989  236655 cri.go:89] found id: ""
	I1020 12:41:19.430999  236655 logs.go:282] 2 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc]
	I1020 12:41:19.431064  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:19.435218  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:19.438944  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:19.439003  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:19.465381  236655 cri.go:89] found id: ""
	I1020 12:41:19.465404  236655 logs.go:282] 0 containers: []
	W1020 12:41:19.465411  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:19.465416  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:19.465475  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:19.494003  236655 cri.go:89] found id: ""
	I1020 12:41:19.494031  236655 logs.go:282] 0 containers: []
	W1020 12:41:19.494042  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:19.494060  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:19.494074  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:19.509421  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:19.509447  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:19.570966  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:19.570988  236655 logs.go:123] Gathering logs for kube-controller-manager [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc] ...
	I1020 12:41:19.571003  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:41:19.599266  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:19.599299  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:19.640100  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:19.640129  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:19.671438  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:19.671464  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:19.743574  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:19.743615  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:19.777925  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:19.777963  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:19.824601  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:19.824635  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
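	
	The cycle above is minikube's diagnostic sweep: each control-plane component is resolved to container IDs with "crictl ps -a --quiet --name=<component>", and each ID found is then tailed with "crictl logs --tail 400 <id>". A minimal standalone sketch of that sweep, assuming only that sudo and crictl are available; the component list and the 400-line tail come from the log, the rest is illustrative and not minikube source:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager"}
		for _, name := range components {
			// List all containers (running or exited) whose name matches.
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("listing %s failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines of each matching container's log.
				logs, _ := exec.Command("sudo", "crictl", "logs",
					"--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s] ==\n%s", name, id, logs)
			}
		}
	}
	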
	W1020 12:41:22.010260  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	W1020 12:41:24.010940  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	I1020 12:41:22.354183  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:22.354571  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:22.354619  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:22.354663  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:22.383737  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:22.383765  236655 cri.go:89] found id: ""
	I1020 12:41:22.383787  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:22.383840  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:22.387910  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:22.387964  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:22.417402  236655 cri.go:89] found id: ""
	I1020 12:41:22.417429  236655 logs.go:282] 0 containers: []
	W1020 12:41:22.417437  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:22.417443  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:22.417499  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:22.445403  236655 cri.go:89] found id: ""
	I1020 12:41:22.445428  236655 logs.go:282] 0 containers: []
	W1020 12:41:22.445436  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:22.445442  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:22.445521  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:22.473543  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:22.473564  236655 cri.go:89] found id: ""
	I1020 12:41:22.473573  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:22.473639  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:22.478193  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:22.478261  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:22.505761  236655 cri.go:89] found id: ""
	I1020 12:41:22.505804  236655 logs.go:282] 0 containers: []
	W1020 12:41:22.505814  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:22.505822  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:22.505900  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:22.535024  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:22.535047  236655 cri.go:89] found id: "c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:41:22.535053  236655 cri.go:89] found id: ""
	I1020 12:41:22.535061  236655 logs.go:282] 2 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc]
	I1020 12:41:22.535121  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:22.539432  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:22.543339  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:22.543407  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:22.570480  236655 cri.go:89] found id: ""
	I1020 12:41:22.570506  236655 logs.go:282] 0 containers: []
	W1020 12:41:22.570514  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:22.570520  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:22.570591  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:22.598327  236655 cri.go:89] found id: ""
	I1020 12:41:22.598358  236655 logs.go:282] 0 containers: []
	W1020 12:41:22.598370  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:22.598385  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:22.598409  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:22.640397  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:22.640437  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:22.714259  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:22.714312  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:22.761736  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:22.761785  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:22.790948  236655 logs.go:123] Gathering logs for kube-controller-manager [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc] ...
	I1020 12:41:22.790979  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:41:22.818476  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:22.818504  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:22.850368  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:22.850401  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:22.864803  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:22.864830  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:22.921676  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:22.921695  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:22.921709  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:25.455850  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:25.456221  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:25.456267  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:25.456314  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:25.483786  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:25.483814  236655 cri.go:89] found id: ""
	I1020 12:41:25.483823  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:25.483905  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:25.488133  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:25.488208  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:25.516741  236655 cri.go:89] found id: ""
	I1020 12:41:25.516767  236655 logs.go:282] 0 containers: []
	W1020 12:41:25.516804  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:25.516809  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:25.516857  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:25.545053  236655 cri.go:89] found id: ""
	I1020 12:41:25.545075  236655 logs.go:282] 0 containers: []
	W1020 12:41:25.545082  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:25.545087  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:25.545141  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:25.576807  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:25.576831  236655 cri.go:89] found id: ""
	I1020 12:41:25.576840  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:25.576904  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:25.581173  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:25.581334  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:25.608906  236655 cri.go:89] found id: ""
	I1020 12:41:25.608930  236655 logs.go:282] 0 containers: []
	W1020 12:41:25.608940  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:25.608948  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:25.609006  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:25.637415  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:25.637438  236655 cri.go:89] found id: ""
	I1020 12:41:25.637448  236655 logs.go:282] 1 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:41:25.637510  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:25.641644  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:25.641711  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:25.670261  236655 cri.go:89] found id: ""
	I1020 12:41:25.670290  236655 logs.go:282] 0 containers: []
	W1020 12:41:25.670297  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:25.670302  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:25.670355  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:25.699544  236655 cri.go:89] found id: ""
	I1020 12:41:25.699570  236655 logs.go:282] 0 containers: []
	W1020 12:41:25.699582  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:25.699592  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:25.699608  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:25.756208  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:25.756229  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:25.756244  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:25.790015  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:25.790043  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:25.837728  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:25.837760  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:25.867330  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:25.867369  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:25.910479  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:25.910509  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:25.947262  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:25.947301  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:26.019369  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:26.019403  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
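	
	The repeated "Checking apiserver healthz ... stopped: ... connection refused" pairs above are minikube polling the apiserver's /healthz endpoint and finding nothing listening on 192.168.94.2:8443. A sketch of an equivalent standalone probe, assuming a self-signed apiserver certificate (address taken from the log; illustrative, not minikube's api_server.go):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a cluster-CA-signed cert the host does
				// not trust; skip verification for this one-off probe.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // matches the "connection refused" lines above
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver returns 200 "ok"
	}
	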
	
	
	==> CRI-O <==
	Oct 20 12:40:54 old-k8s-version-384253 crio[566]: time="2025-10-20T12:40:54.831429794Z" level=info msg="Created container 332105a576843e1a18f1ce5bc18761e73f98bfb38484b691cc02ba884d3d6026: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cvpnn/kubernetes-dashboard" id=7c4b19d4-562f-47d1-8df0-eb8149507906 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:40:54 old-k8s-version-384253 crio[566]: time="2025-10-20T12:40:54.832378691Z" level=info msg="Starting container: 332105a576843e1a18f1ce5bc18761e73f98bfb38484b691cc02ba884d3d6026" id=0cac03c3-7c2e-43ea-a15f-1d072177e347 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:40:54 old-k8s-version-384253 crio[566]: time="2025-10-20T12:40:54.834805013Z" level=info msg="Started container" PID=1727 containerID=332105a576843e1a18f1ce5bc18761e73f98bfb38484b691cc02ba884d3d6026 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cvpnn/kubernetes-dashboard id=0cac03c3-7c2e-43ea-a15f-1d072177e347 name=/runtime.v1.RuntimeService/StartContainer sandboxID=882cf6f516791d40fae26df2ac842fe0ead8bb59fb3d0c9cd9c4b822ad2e90dd
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.038965865Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=517a85f7-fc79-432e-ad36-32695339b25e name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.039981263Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cdb258b8-de27-4341-b367-e2899de38c04 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.041026794Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e21644c7-9180-4736-aa33-13fdf375eb11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.041170294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.045934461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.046135319Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dd9b6ee19903c89d1ac0b2ad6801de1cac7a053915132c891e073eb1031ba41d/merged/etc/passwd: no such file or directory"
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.046163817Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dd9b6ee19903c89d1ac0b2ad6801de1cac7a053915132c891e073eb1031ba41d/merged/etc/group: no such file or directory"
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.046460557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.097064125Z" level=info msg="Created container bbb58682200169c5790db34c4f863aebe576c9d5fa7ac27ea2f0a8bbeaa434e8: kube-system/storage-provisioner/storage-provisioner" id=e21644c7-9180-4736-aa33-13fdf375eb11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.09774812Z" level=info msg="Starting container: bbb58682200169c5790db34c4f863aebe576c9d5fa7ac27ea2f0a8bbeaa434e8" id=8f858129-49b2-4126-b480-e6a857fedb11 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.099728059Z" level=info msg="Started container" PID=1751 containerID=bbb58682200169c5790db34c4f863aebe576c9d5fa7ac27ea2f0a8bbeaa434e8 description=kube-system/storage-provisioner/storage-provisioner id=8f858129-49b2-4126-b480-e6a857fedb11 name=/runtime.v1.RuntimeService/StartContainer sandboxID=76c0967551336b6cc7205cb2709d4a3034151fd9232478c8cb3d6e8b1da5c2a6
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.932177176Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f62146ac-0957-4b7e-b95f-9fbf57e50eb3 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.933160077Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b3637eed-387a-4dc3-9c49-ea038fc93b99 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.934210833Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l/dashboard-metrics-scraper" id=607803ad-67b4-4538-bd81-253f2dd9de37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.934337946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.939487804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.940024213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.967335993Z" level=info msg="Created container 11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l/dashboard-metrics-scraper" id=607803ad-67b4-4538-bd81-253f2dd9de37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.967964488Z" level=info msg="Starting container: 11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe" id=08eeeaa8-0d72-40f2-81a6-2aafdac1b6d2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.969755939Z" level=info msg="Started container" PID=1767 containerID=11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l/dashboard-metrics-scraper id=08eeeaa8-0d72-40f2-81a6-2aafdac1b6d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca0bb8bed647f3f6dde7e7eace58339868520e3adab03af999ad782f7a6a32c5
	Oct 20 12:41:12 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:12.052411475Z" level=info msg="Removing container: cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c" id=a6b3ec62-a7a2-453f-934f-7f6ae1327a4a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:41:12 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:12.062440435Z" level=info msg="Removed container cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l/dashboard-metrics-scraper" id=a6b3ec62-a7a2-453f-934f-7f6ae1327a4a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	11d85e029478f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   ca0bb8bed647f       dashboard-metrics-scraper-5f989dc9cf-f8g6l       kubernetes-dashboard
	bbb5868220016       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   76c0967551336       storage-provisioner                              kube-system
	332105a576843       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   32 seconds ago      Running             kubernetes-dashboard        0                   882cf6f516791       kubernetes-dashboard-8694d4445c-cvpnn            kubernetes-dashboard
	a9c9157678ee8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           49 seconds ago      Running             coredns                     0                   594a7b87856be       coredns-5dd5756b68-c9869                         kube-system
	82b2ffc8539bb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   35de5113a5c9f       busybox                                          default
	619011c2bcd4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   76c0967551336       storage-provisioner                              kube-system
	e1aadd87abcbc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   bad9d380f1612       kindnet-tr8rl                                    kube-system
	81f8635595c35       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           49 seconds ago      Running             kube-proxy                  0                   599f063043909       kube-proxy-qfvtm                                 kube-system
	5e481e30b8ec4       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           52 seconds ago      Running             kube-scheduler              0                   d8d8d4419482b       kube-scheduler-old-k8s-version-384253            kube-system
	bc8f02baa8770       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           52 seconds ago      Running             kube-apiserver              0                   93a32088ed8b2       kube-apiserver-old-k8s-version-384253            kube-system
	e1cc7b6a003ed       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           52 seconds ago      Running             kube-controller-manager     0                   6823fe23f657b       kube-controller-manager-old-k8s-version-384253   kube-system
	f6c082ba3c5bb       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           52 seconds ago      Running             etcd                        0                   8e2aaf6801aad       etcd-old-k8s-version-384253                      kube-system
	
	
	==> coredns [a9c9157678ee8818b6613789a87ebd56bf24f6bce34399e3307522241d499bf8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35175 - 16843 "HINFO IN 8920450995706395022.5979181544720036275. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018424577s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
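	
	The final WARNING above is coredns's kubernetes plugin timing out against the in-cluster service VIP, which stays unreachable until kube-proxy has programmed the service rules. The same dial can be checked standalone (address and port from the log; illustrative, not coredns source):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// 10.96.0.1:443 is the default kubernetes.default service VIP.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // "i/o timeout" matches the log
			return
		}
		defer conn.Close()
		fmt.Println("service VIP reachable")
	}
	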
	
	
	==> describe nodes <==
	Name:               old-k8s-version-384253
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-384253
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=old-k8s-version-384253
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_39_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:39:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-384253
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:41:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:41:07 +0000   Mon, 20 Oct 2025 12:39:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:41:07 +0000   Mon, 20 Oct 2025 12:39:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:41:07 +0000   Mon, 20 Oct 2025 12:39:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:41:07 +0000   Mon, 20 Oct 2025 12:39:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-384253
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b6451977-b7d8-4840-89f0-12d79aaa4949
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-c9869                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-old-k8s-version-384253                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-tr8rl                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-384253             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-384253    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-qfvtm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-384253             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-f8g6l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-cvpnn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node old-k8s-version-384253 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node old-k8s-version-384253 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node old-k8s-version-384253 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node old-k8s-version-384253 event: Registered Node old-k8s-version-384253 in Controller
	  Normal  NodeReady                90s                kubelet          Node old-k8s-version-384253 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 54s)  kubelet          Node old-k8s-version-384253 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 54s)  kubelet          Node old-k8s-version-384253 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 54s)  kubelet          Node old-k8s-version-384253 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node old-k8s-version-384253 event: Registered Node old-k8s-version-384253 in Controller
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [f6c082ba3c5bb39c9c14011daf9f0b91a04643d84063cc518b4449099b0fd75e] <==
	{"level":"info","ts":"2025-10-20T12:40:34.496185Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-10-20T12:40:34.496305Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T12:40:34.496539Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T12:40:34.49632Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T12:40:34.496677Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T12:40:34.496793Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T12:40:34.499117Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-20T12:40:34.499293Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-20T12:40:34.499349Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-20T12:40:34.499443Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-20T12:40:34.499499Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-20T12:40:35.786982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-20T12:40:35.787034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-20T12:40:35.787071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-20T12:40:35.787086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-20T12:40:35.787092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-20T12:40:35.7871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-20T12:40:35.787108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-20T12:40:35.78872Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-384253 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-20T12:40:35.78872Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T12:40:35.788748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T12:40:35.789057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-20T12:40:35.789094Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-20T12:40:35.789928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-20T12:40:35.789946Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:41:27 up  1:23,  0 user,  load average: 2.99, 3.38, 2.08
	Linux old-k8s-version-384253 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e1aadd87abcbc99c03699210c5ae4f8e8e1782905fba250d326b688cbbd48f15] <==
	I1020 12:40:37.582274       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:40:37.582514       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1020 12:40:37.582648       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:40:37.582662       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:40:37.582689       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:40:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:40:37.784474       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:40:37.785460       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:40:37.785508       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:40:37.785687       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:40:38.085700       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:40:38.085727       1 metrics.go:72] Registering metrics
	I1020 12:40:38.085813       1 controller.go:711] "Syncing nftables rules"
	I1020 12:40:47.784974       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:40:47.785060       1 main.go:301] handling current node
	I1020 12:40:57.785952       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:40:57.786006       1 main.go:301] handling current node
	I1020 12:41:07.784366       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:41:07.784401       1 main.go:301] handling current node
	I1020 12:41:17.788626       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:41:17.788654       1 main.go:301] handling current node
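	
	The "Handling node with IPs" entries above repeat on a ten-second cadence (12:40:47, 12:40:57, 12:41:07, 12:41:17): a periodic resync that re-handles the current node. A minimal sketch of that loop shape; handleNode is a hypothetical stand-in for kindnet's per-node work:
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// handleNode stands in for kindnet's per-node reconciliation.
	func handleNode(ips map[string]struct{}) {
		fmt.Printf("Handling node with IPs: %v\n", ips)
	}
	
	func main() {
		ips := map[string]struct{}{"192.168.103.2": {}}
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for range ticker.C { // runs until the process is stopped
			handleNode(ips)
		}
	}
	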
	
	
	==> kube-apiserver [bc8f02baa8770ba6721a99030f25088261d2c0cd3db222046296ba97c0e0d54e] <==
	I1020 12:40:36.741908       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1020 12:40:36.787910       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1020 12:40:36.787935       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1020 12:40:36.787966       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1020 12:40:36.788153       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1020 12:40:36.788740       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 12:40:36.788899       1 shared_informer.go:318] Caches are synced for configmaps
	I1020 12:40:36.795670       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1020 12:40:36.795695       1 aggregator.go:166] initial CRD sync complete...
	I1020 12:40:36.795701       1 autoregister_controller.go:141] Starting autoregister controller
	I1020 12:40:36.795705       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 12:40:36.795710       1 cache.go:39] Caches are synced for autoregister controller
	I1020 12:40:36.819106       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:40:37.616293       1 controller.go:624] quota admission added evaluator for: namespaces
	I1020 12:40:37.648294       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1020 12:40:37.666476       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:40:37.675136       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:40:37.682324       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1020 12:40:37.692580       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:40:37.720362       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.233.182"}
	I1020 12:40:37.733703       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.249.34"}
	I1020 12:40:48.896019       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:40:48.898640       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1020 12:40:49.161414       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e1cc7b6a003edbbb90ecfe2f4ca699c5caa7bc9e2e4aab94b226caa3576d4308] <==
	I1020 12:40:49.168353       1 shared_informer.go:318] Caches are synced for resource quota
	I1020 12:40:49.216802       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="261.240338ms"
	I1020 12:40:49.216924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.216µs"
	I1020 12:40:49.217264       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-f8g6l"
	I1020 12:40:49.217283       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-cvpnn"
	I1020 12:40:49.224151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="268.60968ms"
	I1020 12:40:49.224581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="268.988876ms"
	I1020 12:40:49.237922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.290435ms"
	I1020 12:40:49.238030       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.183µs"
	I1020 12:40:49.239570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="15.367577ms"
	I1020 12:40:49.250003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="58.416µs"
	I1020 12:40:49.253154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.487523ms"
	I1020 12:40:49.253274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.23µs"
	I1020 12:40:49.485598       1 shared_informer.go:318] Caches are synced for garbage collector
	I1020 12:40:49.540304       1 shared_informer.go:318] Caches are synced for garbage collector
	I1020 12:40:49.540343       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1020 12:40:52.002427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="106.169µs"
	I1020 12:40:53.008999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.069µs"
	I1020 12:40:54.014976       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="111.218µs"
	I1020 12:40:55.118810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.51844ms"
	I1020 12:40:55.118916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.059µs"
	I1020 12:41:10.725195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.078694ms"
	I1020 12:41:10.725312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.753µs"
	I1020 12:41:12.062142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="112.123µs"
	I1020 12:41:19.537335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="103.009µs"
	
	
	==> kube-proxy [81f8635595c355db5ae5a00afb41d8dd5cb7bff59c4bdad7af60c092966dab72] <==
	I1020 12:40:37.353707       1 server_others.go:69] "Using iptables proxy"
	I1020 12:40:37.364160       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1020 12:40:37.385107       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:40:37.387611       1 server_others.go:152] "Using iptables Proxier"
	I1020 12:40:37.387665       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1020 12:40:37.387674       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1020 12:40:37.387703       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1020 12:40:37.388113       1 server.go:846] "Version info" version="v1.28.0"
	I1020 12:40:37.388137       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:40:37.388842       1 config.go:188] "Starting service config controller"
	I1020 12:40:37.388880       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1020 12:40:37.388892       1 config.go:97] "Starting endpoint slice config controller"
	I1020 12:40:37.388904       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1020 12:40:37.389450       1 config.go:315] "Starting node config controller"
	I1020 12:40:37.389468       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1020 12:40:37.489918       1 shared_informer.go:318] Caches are synced for node config
	I1020 12:40:37.489945       1 shared_informer.go:318] Caches are synced for service config
	I1020 12:40:37.489965       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5e481e30b8ec40735fa2f558bf9dd408ddb9a893973ee6253a8f9996d7dde47c] <==
	I1020 12:40:35.164355       1 serving.go:348] Generated self-signed cert in-memory
	I1020 12:40:36.761356       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1020 12:40:36.761381       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:40:36.765473       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:40:36.765486       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1020 12:40:36.765501       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1020 12:40:36.765508       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1020 12:40:36.765518       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:40:36.765540       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1020 12:40:36.766550       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1020 12:40:36.766579       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1020 12:40:36.866435       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1020 12:40:36.866474       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1020 12:40:36.866434       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 20 12:40:49 old-k8s-version-384253 kubelet[721]: I1020 12:40:49.381219     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p96d6\" (UniqueName: \"kubernetes.io/projected/9a983538-7cec-4083-9feb-24536fead6c9-kube-api-access-p96d6\") pod \"dashboard-metrics-scraper-5f989dc9cf-f8g6l\" (UID: \"9a983538-7cec-4083-9feb-24536fead6c9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l"
	Oct 20 12:40:49 old-k8s-version-384253 kubelet[721]: I1020 12:40:49.381267     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr2rv\" (UniqueName: \"kubernetes.io/projected/3b04a5b6-792d-4f4a-9bc5-1880c814dee0-kube-api-access-qr2rv\") pod \"kubernetes-dashboard-8694d4445c-cvpnn\" (UID: \"3b04a5b6-792d-4f4a-9bc5-1880c814dee0\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cvpnn"
	Oct 20 12:40:49 old-k8s-version-384253 kubelet[721]: I1020 12:40:49.381296     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3b04a5b6-792d-4f4a-9bc5-1880c814dee0-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-cvpnn\" (UID: \"3b04a5b6-792d-4f4a-9bc5-1880c814dee0\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cvpnn"
	Oct 20 12:40:49 old-k8s-version-384253 kubelet[721]: I1020 12:40:49.381320     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9a983538-7cec-4083-9feb-24536fead6c9-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-f8g6l\" (UID: \"9a983538-7cec-4083-9feb-24536fead6c9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l"
	Oct 20 12:40:51 old-k8s-version-384253 kubelet[721]: I1020 12:40:51.990169     721 scope.go:117] "RemoveContainer" containerID="df7922cb8985e9d327fc88ee1d73c558495e7340db782ebda99550fb326fd4b9"
	Oct 20 12:40:52 old-k8s-version-384253 kubelet[721]: I1020 12:40:52.996736     721 scope.go:117] "RemoveContainer" containerID="df7922cb8985e9d327fc88ee1d73c558495e7340db782ebda99550fb326fd4b9"
	Oct 20 12:40:52 old-k8s-version-384253 kubelet[721]: I1020 12:40:52.996967     721 scope.go:117] "RemoveContainer" containerID="cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c"
	Oct 20 12:40:52 old-k8s-version-384253 kubelet[721]: E1020 12:40:52.997339     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f8g6l_kubernetes-dashboard(9a983538-7cec-4083-9feb-24536fead6c9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l" podUID="9a983538-7cec-4083-9feb-24536fead6c9"
	Oct 20 12:40:54 old-k8s-version-384253 kubelet[721]: I1020 12:40:54.002029     721 scope.go:117] "RemoveContainer" containerID="cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c"
	Oct 20 12:40:54 old-k8s-version-384253 kubelet[721]: E1020 12:40:54.002361     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f8g6l_kubernetes-dashboard(9a983538-7cec-4083-9feb-24536fead6c9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l" podUID="9a983538-7cec-4083-9feb-24536fead6c9"
	Oct 20 12:40:55 old-k8s-version-384253 kubelet[721]: I1020 12:40:55.057430     721 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cvpnn" podStartSLOduration=0.818706236 podCreationTimestamp="2025-10-20 12:40:49 +0000 UTC" firstStartedPulling="2025-10-20 12:40:49.549963648 +0000 UTC m=+15.709866317" lastFinishedPulling="2025-10-20 12:40:54.788621375 +0000 UTC m=+20.948524056" observedRunningTime="2025-10-20 12:40:55.056835136 +0000 UTC m=+21.216737822" watchObservedRunningTime="2025-10-20 12:40:55.057363975 +0000 UTC m=+21.217266663"
	Oct 20 12:40:59 old-k8s-version-384253 kubelet[721]: I1020 12:40:59.525458     721 scope.go:117] "RemoveContainer" containerID="cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c"
	Oct 20 12:40:59 old-k8s-version-384253 kubelet[721]: E1020 12:40:59.525916     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f8g6l_kubernetes-dashboard(9a983538-7cec-4083-9feb-24536fead6c9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l" podUID="9a983538-7cec-4083-9feb-24536fead6c9"
	Oct 20 12:41:08 old-k8s-version-384253 kubelet[721]: I1020 12:41:08.038454     721 scope.go:117] "RemoveContainer" containerID="619011c2bcd4b73d4a9961d9b414cc1b7bf872d29079384e5981cc7d29203fa9"
	Oct 20 12:41:11 old-k8s-version-384253 kubelet[721]: I1020 12:41:11.931473     721 scope.go:117] "RemoveContainer" containerID="cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c"
	Oct 20 12:41:12 old-k8s-version-384253 kubelet[721]: I1020 12:41:12.050848     721 scope.go:117] "RemoveContainer" containerID="cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c"
	Oct 20 12:41:12 old-k8s-version-384253 kubelet[721]: I1020 12:41:12.051220     721 scope.go:117] "RemoveContainer" containerID="11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe"
	Oct 20 12:41:12 old-k8s-version-384253 kubelet[721]: E1020 12:41:12.051591     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f8g6l_kubernetes-dashboard(9a983538-7cec-4083-9feb-24536fead6c9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l" podUID="9a983538-7cec-4083-9feb-24536fead6c9"
	Oct 20 12:41:19 old-k8s-version-384253 kubelet[721]: I1020 12:41:19.525402     721 scope.go:117] "RemoveContainer" containerID="11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe"
	Oct 20 12:41:19 old-k8s-version-384253 kubelet[721]: E1020 12:41:19.525792     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f8g6l_kubernetes-dashboard(9a983538-7cec-4083-9feb-24536fead6c9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l" podUID="9a983538-7cec-4083-9feb-24536fead6c9"
	Oct 20 12:41:24 old-k8s-version-384253 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 12:41:24 old-k8s-version-384253 kubelet[721]: I1020 12:41:24.286022     721 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 20 12:41:24 old-k8s-version-384253 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 12:41:24 old-k8s-version-384253 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 20 12:41:24 old-k8s-version-384253 systemd[1]: kubelet.service: Consumed 1.437s CPU time.
	
	
	==> kubernetes-dashboard [332105a576843e1a18f1ce5bc18761e73f98bfb38484b691cc02ba884d3d6026] <==
	2025/10/20 12:40:54 Starting overwatch
	2025/10/20 12:40:54 Using namespace: kubernetes-dashboard
	2025/10/20 12:40:54 Using in-cluster config to connect to apiserver
	2025/10/20 12:40:54 Using secret token for csrf signing
	2025/10/20 12:40:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 12:40:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 12:40:54 Successful initial request to the apiserver, version: v1.28.0
	2025/10/20 12:40:54 Generating JWE encryption key
	2025/10/20 12:40:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 12:40:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 12:40:55 Initializing JWE encryption key from synchronized object
	2025/10/20 12:40:55 Creating in-cluster Sidecar client
	2025/10/20 12:40:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 12:40:55 Serving insecurely on HTTP port: 9090
	2025/10/20 12:41:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [619011c2bcd4b73d4a9961d9b414cc1b7bf872d29079384e5981cc7d29203fa9] <==
	I1020 12:40:37.315527       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 12:41:07.319194       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bbb58682200169c5790db34c4f863aebe576c9d5fa7ac27ea2f0a8bbeaa434e8] <==
	I1020 12:41:08.113724       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 12:41:08.125118       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 12:41:08.125153       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1020 12:41:25.560677       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 12:41:25.560832       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a192978-e7b4-438b-8996-16ddc24fec6e", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-384253_91134c2a-6abc-47e9-ad6d-a09b907ee79c became leader
	I1020 12:41:25.560902       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-384253_91134c2a-6abc-47e9-ad6d-a09b907ee79c!
	I1020 12:41:25.661229       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-384253_91134c2a-6abc-47e9-ad6d-a09b907ee79c!
	

-- /stdout --
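In the tail above, the first storage-provisioner instance (619011c2...) exits fatally with "dial tcp 10.96.0.1:443: i/o timeout", i.e. it could not reach the apiserver through the cluster service VIP while the node was coming back up; the replacement instance (bbb58682...) then acquires the kube-system/k8s.io-minikube-hostpath lease and runs normally. A hand-run check along the same lines might look like this (a sketch, assuming the pod keeps minikube's usual name "storage-provisioner" in kube-system):

	kubectl --context old-k8s-version-384253 -n kube-system logs storage-provisioner --previous --tail=5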
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384253 -n old-k8s-version-384253
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384253 -n old-k8s-version-384253: exit status 2 (316.688579ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
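The status checks here pass a Go template over minikube's status struct, so "{{.APIServer}}" (and "{{.Host}}" further down) each print a single component's state; a non-zero exit code signals that at least one component is not in its expected state, which is why the harness treats exit status 2 as possibly benign. For scripting, the JSON form sidesteps template parsing entirely (a sketch; flag spelling as in current minikube releases):

	out/minikube-linux-amd64 status -p old-k8s-version-384253 --output=json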
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-384253 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
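The field selector above makes the apiserver, not the client, filter out healthy pods, so only pods whose phase is not Running come back. An equivalent hand-run form that also shows the phase (a sketch; the custom-columns spec is assumed, not taken from the harness):

	kubectl --context old-k8s-version-384253 get pods -A --field-selector=status.phase!=Running -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase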
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-384253
helpers_test.go:243: (dbg) docker inspect old-k8s-version-384253:

-- stdout --
	[
	    {
	        "Id": "42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3",
	        "Created": "2025-10-20T12:39:15.199417657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243248,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:40:27.72062841Z",
	            "FinishedAt": "2025-10-20T12:40:26.877008283Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3/hosts",
	        "LogPath": "/var/lib/docker/containers/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3/42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3-json.log",
	        "Name": "/old-k8s-version-384253",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-384253:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-384253",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42a1b3150f06dd1ea9a59584521fe61e4928f06e3d5ae1dacd4ff0cc8d4922e3",
	                "LowerDir": "/var/lib/docker/overlay2/90952055fc44c355f2c0ef9297152faad84fdfa9d5aaa20b7f5efe37b11adc6c-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/90952055fc44c355f2c0ef9297152faad84fdfa9d5aaa20b7f5efe37b11adc6c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/90952055fc44c355f2c0ef9297152faad84fdfa9d5aaa20b7f5efe37b11adc6c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/90952055fc44c355f2c0ef9297152faad84fdfa9d5aaa20b7f5efe37b11adc6c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-384253",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-384253/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-384253",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-384253",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-384253",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e9976ab8b92ca56f3b6c8d967444c1100008a657aa66c0505e319f362d51cd2",
	            "SandboxKey": "/var/run/docker/netns/1e9976ab8b92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-384253": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:74:66:9d:d2:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "297cbf1591dbfc42eff4519f7180072339a2b6c16821ef2400eadb774f669261",
	                    "EndpointID": "a35726fce365fe0d15ada0107d4feefb2c8606c3d3165ef6642f7f27bcf2857a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-384253",
	                        "42a1b3150f06"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
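When reading the inspect dump above by hand, the signals that matter for a pause failure are State.Status/State.Paused (the container is running and unpaused) and the published 8443 apiserver port. The same fields can be pulled directly with docker's --format templating, reusing the index expression minikube itself uses for port lookups (a sketch over this profile's container name):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} apiserver={{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-384253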
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384253 -n old-k8s-version-384253
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384253 -n old-k8s-version-384253: exit status 2 (325.876851ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-384253 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-384253 logs -n 25: (1.156447155s)
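"logs -n 25" caps each collected component log at its last 25 lines, which is why every "==> component <==" block below is a short tail rather than a full history. To keep a complete copy for later analysis, minikube can write the log bundle to disk instead (a sketch; --file is minikube's documented flag for that):

	out/minikube-linux-amd64 -p old-k8s-version-384253 logs --file=old-k8s-version-384253.log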
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p pause-918853 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ pause   │ -p pause-918853 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │                     │
	│ delete  │ -p pause-918853                                                                                                                                                                                                                               │ pause-918853              │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ start   │ -p cert-options-418869 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p missing-upgrade-123936                                                                                                                                                                                                                     │ missing-upgrade-123936    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ force-systemd-flag-670413 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-670413 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p force-systemd-flag-670413                                                                                                                                                                                                                  │ force-systemd-flag-670413 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ cert-options-418869 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ -p cert-options-418869 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p cert-options-418869                                                                                                                                                                                                                        │ cert-options-418869       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:40 UTC │
	│ stop    │ -p kubernetes-upgrade-196539                                                                                                                                                                                                                  │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-384253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p old-k8s-version-384253 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-384253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:41 UTC │
	│ addons  │ enable metrics-server -p no-preload-649841 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p no-preload-649841 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p no-preload-649841 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841         │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ image   │ old-k8s-version-384253 image list --format=json                                                                                                                                                                                               │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p old-k8s-version-384253 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-384253    │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:40:49
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:40:49.636056  246403 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:40:49.636325  246403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:40:49.636354  246403 out.go:374] Setting ErrFile to fd 2...
	I1020 12:40:49.636360  246403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:40:49.636535  246403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:40:49.637053  246403 out.go:368] Setting JSON to false
	I1020 12:40:49.638246  246403 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4999,"bootTime":1760959051,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:40:49.638344  246403 start.go:141] virtualization: kvm guest
	I1020 12:40:49.640434  246403 out.go:179] * [no-preload-649841] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:40:49.642414  246403 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:40:49.642418  246403 notify.go:220] Checking for updates...
	I1020 12:40:49.645427  246403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:40:49.647167  246403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:40:49.648592  246403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:40:49.649884  246403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:40:49.651129  246403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:40:49.653960  246403 config.go:182] Loaded profile config "no-preload-649841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:40:49.654462  246403 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:40:49.680382  246403 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:40:49.680467  246403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:40:49.752170  246403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-20 12:40:49.737545471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:40:49.752328  246403 docker.go:318] overlay module found
	I1020 12:40:49.754839  246403 out.go:179] * Using the docker driver based on existing profile
	I1020 12:40:49.755964  246403 start.go:305] selected driver: docker
	I1020 12:40:49.755981  246403 start.go:925] validating driver "docker" against &{Name:no-preload-649841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:40:49.756091  246403 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:40:49.756815  246403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:40:49.850275  246403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-20 12:40:49.835747759 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:40:49.850704  246403 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:40:49.850752  246403 cni.go:84] Creating CNI manager for ""
	I1020 12:40:49.850829  246403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:40:49.850881  246403 start.go:349] cluster config:
	{Name:no-preload-649841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:40:49.854333  246403 out.go:179] * Starting "no-preload-649841" primary control-plane node in "no-preload-649841" cluster
	I1020 12:40:49.855499  246403 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:40:49.856923  246403 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:40:49.858535  246403 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:40:49.858660  246403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:40:49.858688  246403 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/config.json ...
	I1020 12:40:49.858906  246403 cache.go:107] acquiring lock: {Name:mkaa1533143dfeaa0b848a90ee060b0f610ddc81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.858938  246403 cache.go:107] acquiring lock: {Name:mkb88d6f234305026db9fdbd31e5610d523894ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.858961  246403 cache.go:107] acquiring lock: {Name:mkbd2ddf92e86ddfea2601bdc03463773cf73f0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.858986  246403 cache.go:107] acquiring lock: {Name:mkf9ce8a4aa3144f3c913fcb0e60bd670d0bc742 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.859011  246403 cache.go:107] acquiring lock: {Name:mk5e32941f5c3db81b90197cda93fec283cdb548 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.859050  246403 cache.go:107] acquiring lock: {Name:mkd5c8fee23b3a6854f408e7a08b0b2884a76ec5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.859079  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1020 12:40:49.859085  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1020 12:40:49.859097  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1020 12:40:49.859092  246403 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 181.625µs
	I1020 12:40:49.859105  246403 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 122.665µs
	I1020 12:40:49.859111  246403 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 62.689µs
	I1020 12:40:49.859118  246403 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1020 12:40:49.859120  246403 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1020 12:40:49.859040  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1020 12:40:49.859135  246403 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 250.791µs
	I1020 12:40:49.859143  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1020 12:40:49.859156  246403 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 240.76µs
	I1020 12:40:49.859165  246403 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1020 12:40:49.859095  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1020 12:40:49.859175  246403 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 168.148µs
	I1020 12:40:49.859121  246403 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1020 12:40:49.859145  246403 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1020 12:40:49.859182  246403 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1020 12:40:49.859028  246403 cache.go:107] acquiring lock: {Name:mk1adfad6c98d9549a5f634b54404b0984d1f237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.859230  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1020 12:40:49.859237  246403 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 213.811µs
	I1020 12:40:49.859246  246403 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1020 12:40:49.859022  246403 cache.go:107] acquiring lock: {Name:mk1c3d7b49aa5031d19fe1d56ce36f186e653c93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.859274  246403 cache.go:115] /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1020 12:40:49.859282  246403 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 269.899µs
	I1020 12:40:49.859297  246403 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1020 12:40:49.859321  246403 cache.go:87] Successfully saved all images to host disk.
	I1020 12:40:49.891468  246403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:40:49.891492  246403 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:40:49.891512  246403 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:40:49.891543  246403 start.go:360] acquireMachinesLock for no-preload-649841: {Name:mke74c98c770c485912453347459850ab361dd04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:40:49.891611  246403 start.go:364] duration metric: took 44.39µs to acquireMachinesLock for "no-preload-649841"
	I1020 12:40:49.891635  246403 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:40:49.891641  246403 fix.go:54] fixHost starting: 
	I1020 12:40:49.891944  246403 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:49.918616  246403 fix.go:112] recreateIfNeeded on no-preload-649841: state=Stopped err=<nil>
	W1020 12:40:49.918651  246403 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 12:40:46.468001  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:46.468408  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:46.967751  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:46.968170  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:47.467835  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:47.468286  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:47.967757  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:47.968216  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:48.467850  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:48.468268  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:48.968023  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:48.968438  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:49.467857  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:49.468219  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:49.967785  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:49.968376  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:50.467855  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:50.468285  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:50.968625  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:50.969057  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
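The api_server.go lines above are a fixed-interval health poll: roughly every 500ms a GET is issued against /healthz, and a refused connection is logged as "stopped" until the apiserver comes back. A standalone Go sketch of the same loop; the URL is taken from the log, while the overall deadline and exact retry policy are assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a self-signed cert, so the probe skips
	// verification; a health check only cares about reachability and status.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.94.2:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute) // overall budget is an assumption
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err) // mirrors api_server.go:269
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for apiserver")
}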
	W1020 12:40:48.784897  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	W1020 12:40:50.785257  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	I1020 12:40:49.921997  246403 out.go:252] * Restarting existing docker container for "no-preload-649841" ...
	I1020 12:40:49.922104  246403 cli_runner.go:164] Run: docker start no-preload-649841
	I1020 12:40:50.242092  246403 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:50.268044  246403 kic.go:430] container "no-preload-649841" state is running.
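kic.go confirms the restart by shelling out to docker container inspect with a Go template, exactly as the cli_runner lines show. A hedged sketch of that probe, assuming only that the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out the same way cli_runner does above and returns
// the raw status string, e.g. "running" or "exited".
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("no-preload-649841")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("state:", state)
}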
	I1020 12:40:50.268495  246403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-649841
	I1020 12:40:50.294690  246403 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/config.json ...
	I1020 12:40:50.294973  246403 machine.go:93] provisionDockerMachine start ...
	I1020 12:40:50.295081  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:50.319467  246403 main.go:141] libmachine: Using SSH client type: native
	I1020 12:40:50.319818  246403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1020 12:40:50.319835  246403 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:40:50.320442  246403 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47736->127.0.0.1:33068: read: connection reset by peer
	I1020 12:40:53.480861  246403 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-649841
	
	I1020 12:40:53.480890  246403 ubuntu.go:182] provisioning hostname "no-preload-649841"
	I1020 12:40:53.480951  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:53.504982  246403 main.go:141] libmachine: Using SSH client type: native
	I1020 12:40:53.505287  246403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1020 12:40:53.505304  246403 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-649841 && echo "no-preload-649841" | sudo tee /etc/hostname
	I1020 12:40:53.675481  246403 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-649841
	
	I1020 12:40:53.675587  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:53.700306  246403 main.go:141] libmachine: Using SSH client type: native
	I1020 12:40:53.700594  246403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1020 12:40:53.700621  246403 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-649841' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-649841/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-649841' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:40:53.858701  246403 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:40:53.858735  246403 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:40:53.858762  246403 ubuntu.go:190] setting up certificates
	I1020 12:40:53.858788  246403 provision.go:84] configureAuth start
	I1020 12:40:53.858859  246403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-649841
	I1020 12:40:53.882618  246403 provision.go:143] copyHostCerts
	I1020 12:40:53.882687  246403 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:40:53.882707  246403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:40:53.882825  246403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:40:53.882971  246403 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:40:53.882983  246403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:40:53.883048  246403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:40:53.883151  246403 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:40:53.883164  246403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:40:53.883201  246403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:40:53.883312  246403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.no-preload-649841 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-649841]
	I1020 12:40:54.281887  246403 provision.go:177] copyRemoteCerts
	I1020 12:40:54.281955  246403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:40:54.281999  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:54.309089  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:54.421670  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:40:54.442413  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 12:40:54.515213  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:40:54.534323  246403 provision.go:87] duration metric: took 675.51789ms to configureAuth
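configureAuth regenerates the machine's server certificate with the SAN list logged at provision.go:117 (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-649841). A self-contained Go sketch of issuing such a cert; note it self-signs for brevity, whereas minikube signs with its CA key pair, and the validity period here is an assumption:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-649841"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list copied from the provision.go:117 line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-649841"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	// Self-signed here for brevity; minikube instead signs with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}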
	I1020 12:40:54.534349  246403 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:40:54.534533  246403 config.go:182] Loaded profile config "no-preload-649841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:40:54.534654  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:54.555834  246403 main.go:141] libmachine: Using SSH client type: native
	I1020 12:40:54.556087  246403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1020 12:40:54.556103  246403 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:40:51.468531  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:40:51.468604  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:40:51.496766  236655 cri.go:89] found id: "2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	I1020 12:40:51.496823  236655 cri.go:89] found id: ""
	I1020 12:40:51.496840  236655 logs.go:282] 1 containers: [2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97]
	I1020 12:40:51.496895  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:40:51.501349  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:40:51.501418  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:40:51.530546  236655 cri.go:89] found id: ""
	I1020 12:40:51.530577  236655 logs.go:282] 0 containers: []
	W1020 12:40:51.530589  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:40:51.530596  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:40:51.530665  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:40:51.560094  236655 cri.go:89] found id: ""
	I1020 12:40:51.560129  236655 logs.go:282] 0 containers: []
	W1020 12:40:51.560137  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:40:51.560143  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:40:51.560192  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:40:51.591153  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:40:51.591179  236655 cri.go:89] found id: ""
	I1020 12:40:51.591188  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:40:51.591252  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:40:51.595779  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:40:51.595843  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:40:51.626368  236655 cri.go:89] found id: ""
	I1020 12:40:51.626399  236655 logs.go:282] 0 containers: []
	W1020 12:40:51.626410  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:40:51.626417  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:40:51.626475  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:40:51.656209  236655 cri.go:89] found id: "c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:40:51.656234  236655 cri.go:89] found id: ""
	I1020 12:40:51.656242  236655 logs.go:282] 1 containers: [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc]
	I1020 12:40:51.656314  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:40:51.661042  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:40:51.661113  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:40:51.691358  236655 cri.go:89] found id: ""
	I1020 12:40:51.691382  236655 logs.go:282] 0 containers: []
	W1020 12:40:51.691392  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:40:51.691398  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:40:51.691454  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:40:51.719865  236655 cri.go:89] found id: ""
	I1020 12:40:51.719894  236655 logs.go:282] 0 containers: []
	W1020 12:40:51.719904  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:40:51.719915  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:40:51.719927  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:40:51.752943  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:40:51.752973  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:40:51.827739  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:40:51.827783  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:40:51.844672  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:40:51.844703  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:40:51.905039  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:40:51.905057  236655 logs.go:123] Gathering logs for kube-apiserver [2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97] ...
	I1020 12:40:51.905082  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	I1020 12:40:51.938119  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:40:51.938149  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:40:51.981553  236655 logs.go:123] Gathering logs for kube-controller-manager [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc] ...
	I1020 12:40:51.981585  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:40:52.020006  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:40:52.020042  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
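Each cri.go probe above runs crictl ps -a --quiet --name=<component> and treats every non-empty output line as a found container ID, then fetches logs per ID. A Go sketch of the same discovery, assuming crictl is installed and sudo is available:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the cri.go probes above: `crictl ps -a --quiet
// --name=<re>` prints one container ID per line, or nothing when no
// container matches.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}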
	I1020 12:40:54.580876  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:40:54.581250  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:40:54.581311  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:40:54.581382  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:40:54.615560  236655 cri.go:89] found id: "2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	I1020 12:40:54.615584  236655 cri.go:89] found id: ""
	I1020 12:40:54.615592  236655 logs.go:282] 1 containers: [2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97]
	I1020 12:40:54.615649  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:40:54.620340  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:40:54.620415  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:40:54.648549  236655 cri.go:89] found id: ""
	I1020 12:40:54.648577  236655 logs.go:282] 0 containers: []
	W1020 12:40:54.648587  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:40:54.648594  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:40:54.648651  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:40:54.678127  236655 cri.go:89] found id: ""
	I1020 12:40:54.678153  236655 logs.go:282] 0 containers: []
	W1020 12:40:54.678160  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:40:54.678165  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:40:54.678215  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:40:54.708845  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:40:54.708870  236655 cri.go:89] found id: ""
	I1020 12:40:54.708881  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:40:54.708937  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:40:54.713757  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:40:54.713857  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:40:54.743864  236655 cri.go:89] found id: ""
	I1020 12:40:54.743892  236655 logs.go:282] 0 containers: []
	W1020 12:40:54.743903  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:40:54.743909  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:40:54.743984  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:40:54.775127  236655 cri.go:89] found id: "c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:40:54.775155  236655 cri.go:89] found id: ""
	I1020 12:40:54.775165  236655 logs.go:282] 1 containers: [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc]
	I1020 12:40:54.775223  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:40:54.779594  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:40:54.779656  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:40:54.810620  236655 cri.go:89] found id: ""
	I1020 12:40:54.810650  236655 logs.go:282] 0 containers: []
	W1020 12:40:54.810659  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:40:54.810666  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:40:54.810750  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:40:54.846027  236655 cri.go:89] found id: ""
	I1020 12:40:54.846054  236655 logs.go:282] 0 containers: []
	W1020 12:40:54.846064  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:40:54.846074  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:40:54.846087  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:40:54.891082  236655 logs.go:123] Gathering logs for kube-controller-manager [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc] ...
	I1020 12:40:54.891117  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:40:54.921076  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:40:54.921120  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:40:54.963381  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:40:54.963425  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:40:54.997225  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:40:54.997262  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:40:55.075836  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:40:55.075874  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:40:55.092557  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:40:55.092591  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1020 12:40:55.301182  246403 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:40:55.301217  246403 machine.go:96] duration metric: took 5.006223618s to provisionDockerMachine
	I1020 12:40:55.301232  246403 start.go:293] postStartSetup for "no-preload-649841" (driver="docker")
	I1020 12:40:55.301246  246403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:40:55.301319  246403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:40:55.301378  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:55.322672  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:55.423433  246403 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:40:55.427173  246403 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:40:55.427209  246403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:40:55.427222  246403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:40:55.427273  246403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:40:55.427353  246403 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:40:55.427442  246403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:40:55.434970  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:40:55.452187  246403 start.go:296] duration metric: took 150.937095ms for postStartSetup
	I1020 12:40:55.452280  246403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:40:55.452324  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:55.470663  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:55.569126  246403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:40:55.573826  246403 fix.go:56] duration metric: took 5.682176942s for fixHost
	I1020 12:40:55.573853  246403 start.go:83] releasing machines lock for "no-preload-649841", held for 5.68222778s
	I1020 12:40:55.573919  246403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-649841
	I1020 12:40:55.595663  246403 ssh_runner.go:195] Run: cat /version.json
	I1020 12:40:55.595708  246403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:40:55.595722  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:55.595761  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:55.614470  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:55.615487  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:55.766439  246403 ssh_runner.go:195] Run: systemctl --version
	I1020 12:40:55.773503  246403 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:40:55.810619  246403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:40:55.815713  246403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:40:55.815817  246403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:40:55.824671  246403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 12:40:55.824695  246403 start.go:495] detecting cgroup driver to use...
	I1020 12:40:55.824735  246403 detect.go:190] detected "systemd" cgroup driver on host os
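detect.go settles on the "systemd" cgroup driver by inspecting the host. One way to reproduce that decision, though not necessarily minikube's exact logic, is to ask the Docker daemon and fall back to a cgroup-v2 filesystem probe:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Prefer the daemon's own answer when Docker is available.
	if out, err := exec.Command("docker", "info",
		"--format", "{{.CgroupDriver}}").Output(); err == nil {
		fmt.Println("docker cgroup driver:", strings.TrimSpace(string(out)))
		return
	}
	// cgroup v2 hosts expose a unified hierarchy with this control file.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 host; systemd driver is the usual choice")
	} else {
		fmt.Println("cgroup v1 host; cgroupfs driver is common")
	}
}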
	I1020 12:40:55.824799  246403 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:40:55.840385  246403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:40:55.853670  246403 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:40:55.853741  246403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:40:55.868375  246403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:40:55.881459  246403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:40:55.965285  246403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:40:56.044253  246403 docker.go:234] disabling docker service ...
	I1020 12:40:56.044328  246403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:40:56.058879  246403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:40:56.071356  246403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:40:56.153839  246403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:40:56.237106  246403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:40:56.249881  246403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:40:56.265010  246403 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:40:56.265073  246403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.274147  246403 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:40:56.274215  246403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.283689  246403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.292859  246403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.301869  246403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:40:56.310458  246403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.320173  246403 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.329513  246403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:40:56.338702  246403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:40:56.346367  246403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:40:56.354084  246403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:40:56.433266  246403 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:40:56.543617  246403 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:40:56.543682  246403 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:40:56.547784  246403 start.go:563] Will wait 60s for crictl version
	I1020 12:40:56.547843  246403 ssh_runner.go:195] Run: which crictl
	I1020 12:40:56.551562  246403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:40:56.576670  246403 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:40:56.576763  246403 ssh_runner.go:195] Run: crio --version
	I1020 12:40:56.605060  246403 ssh_runner.go:195] Run: crio --version
	I1020 12:40:56.636370  246403 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1020 12:40:53.285248  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	W1020 12:40:55.285807  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	I1020 12:40:56.637696  246403 cli_runner.go:164] Run: docker network inspect no-preload-649841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:40:56.656858  246403 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 12:40:56.661099  246403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
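The one-liner above is an idempotent /etc/hosts rewrite: filter out any existing host.minikube.internal line, append a fresh IP<TAB>name mapping, and copy the result back in one shot. The same pattern in Go, pointed at a scratch file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "<TAB>name" (the same
// anchor the bash grep uses above) and appends a fresh "ip<TAB>name" entry.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Operate on a scratch copy; editing the real /etc/hosts needs root.
	if err := ensureHostsEntry("/tmp/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}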
	I1020 12:40:56.671901  246403 kubeadm.go:883] updating cluster {Name:no-preload-649841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:40:56.672010  246403 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:40:56.672041  246403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:40:56.705922  246403 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:40:56.705943  246403 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:40:56.705950  246403 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 12:40:56.706072  246403 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-649841 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 12:40:56.706168  246403 ssh_runner.go:195] Run: crio config
	I1020 12:40:56.753348  246403 cni.go:84] Creating CNI manager for ""
	I1020 12:40:56.753368  246403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:40:56.753382  246403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:40:56.753406  246403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-649841 NodeName:no-preload-649841 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:40:56.753543  246403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-649841"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
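	The kubeadm.yaml generated above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A sketch that splits such a stream and reports each document's kind; it assumes the gopkg.in/yaml.v3 module is available:

package main

import (
	"errors"
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

// Trimmed to the document headers; the full config is in the log above.
const cfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(cfg))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return // end of the multi-document stream
			}
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}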
	
	I1020 12:40:56.753612  246403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:40:56.762410  246403 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:40:56.762478  246403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:40:56.770453  246403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1020 12:40:56.784132  246403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:40:56.797279  246403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1020 12:40:56.810339  246403 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:40:56.814217  246403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:40:56.825235  246403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:40:56.906648  246403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:40:56.931385  246403 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841 for IP: 192.168.85.2
	I1020 12:40:56.931409  246403 certs.go:195] generating shared ca certs ...
	I1020 12:40:56.931432  246403 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:40:56.931589  246403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:40:56.931646  246403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:40:56.931658  246403 certs.go:257] generating profile certs ...
	I1020 12:40:56.931755  246403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.key
	I1020 12:40:56.931852  246403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.key.f7062585
	I1020 12:40:56.931911  246403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.key
	I1020 12:40:56.932107  246403 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:40:56.932151  246403 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:40:56.932163  246403 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:40:56.932197  246403 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:40:56.932228  246403 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:40:56.932258  246403 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:40:56.932317  246403 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:40:56.933038  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:40:56.953292  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:40:56.973650  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:40:56.993266  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:40:57.017517  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1020 12:40:57.036397  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:40:57.054108  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:40:57.072133  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:40:57.090479  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:40:57.108635  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:40:57.127561  246403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:40:57.145529  246403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:40:57.158416  246403 ssh_runner.go:195] Run: openssl version
	I1020 12:40:57.164759  246403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:40:57.173699  246403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:40:57.177364  246403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:40:57.177419  246403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:40:57.212538  246403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:40:57.221201  246403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:40:57.230062  246403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:40:57.234010  246403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:40:57.234077  246403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:40:57.269185  246403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:40:57.277502  246403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:40:57.287166  246403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:40:57.291055  246403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:40:57.291115  246403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:40:57.326998  246403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:40:57.335446  246403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:40:57.339569  246403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 12:40:57.376124  246403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 12:40:57.413731  246403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 12:40:57.456807  246403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 12:40:57.501840  246403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 12:40:57.550022  246403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
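The openssl x509 -checkend 86400 calls above exit non-zero when a certificate expires within 24 hours, which is how minikube decides whether a profile's certs need regeneration. A rough Go equivalent that takes the PEM path as its first argument:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Rough Go counterpart of `openssl x509 -noout -in <cert> -checkend 86400`:
// exit 1 if the certificate's NotAfter falls within the next 24 hours.
func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}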
	I1020 12:40:57.605984  246403 kubeadm.go:400] StartCluster: {Name:no-preload-649841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-649841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:40:57.606106  246403 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:40:57.606162  246403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:40:57.641826  246403 cri.go:89] found id: "816d9c037942c04231fca4c103de9e2bf20fdf60fa1761988b5c578a09691679"
	I1020 12:40:57.641847  246403 cri.go:89] found id: "49212f5520e23aa6f4699b58e138ce3c6899c074fd04839a3812363c6bf726d0"
	I1020 12:40:57.641854  246403 cri.go:89] found id: "bf13bdfc60d3a55c47badd4fa2e0a4042348a310ddce98adaa907a594a64d40d"
	I1020 12:40:57.641858  246403 cri.go:89] found id: "28717124ea3c362de3161e549a9412d0e0beda3ede0b813f19be2debafac8bd1"
	I1020 12:40:57.641862  246403 cri.go:89] found id: ""
	I1020 12:40:57.641907  246403 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 12:40:57.655461  246403 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:40:57Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:40:57.655549  246403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:40:57.664124  246403 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 12:40:57.664145  246403 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 12:40:57.664189  246403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 12:40:57.672106  246403 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:40:57.673025  246403 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-649841" does not appear in /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:40:57.673599  246403 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-11075/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-649841" cluster setting kubeconfig missing "no-preload-649841" context setting]
	I1020 12:40:57.674474  246403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:40:57.676494  246403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 12:40:57.686220  246403 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1020 12:40:57.686261  246403 kubeadm.go:601] duration metric: took 22.109507ms to restartPrimaryControlPlane
	I1020 12:40:57.686293  246403 kubeadm.go:402] duration metric: took 80.296499ms to StartCluster
	I1020 12:40:57.686315  246403 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:40:57.686402  246403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:40:57.688167  246403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
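The lock.go lines show kubeconfig writes serialized behind a named lock (500ms retry delay, 1m timeout) so concurrent profiles cannot corrupt the file. A hedged, Linux-only sketch of the general pattern, an exclusive flock plus write-to-temp-and-rename; minikube's actual lock implementation differs:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// writeFileLocked takes an exclusive flock on a sidecar lock file, writes to
// a temp file, then renames it into place so readers never see a partial
// kubeconfig. Pattern only; not minikube's lock.go.
func writeFileLocked(path string, data []byte) error {
	lock, err := os.OpenFile(path+".lock", os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer lock.Close()
	if err := syscall.Flock(int(lock.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(lock.Fd()), syscall.LOCK_UN)

	tmp := filepath.Join(filepath.Dir(path), ".tmp-kubeconfig")
	if err := os.WriteFile(tmp, data, 0o600); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic on the same filesystem
}

func main() {
	if err := writeFileLocked("/tmp/kubeconfig", []byte("apiVersion: v1\nkind: Config\n")); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}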
	I1020 12:40:57.688425  246403 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:40:57.688495  246403 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:40:57.688585  246403 addons.go:69] Setting storage-provisioner=true in profile "no-preload-649841"
	I1020 12:40:57.688604  246403 addons.go:238] Setting addon storage-provisioner=true in "no-preload-649841"
	W1020 12:40:57.688615  246403 addons.go:247] addon storage-provisioner should already be in state true
	I1020 12:40:57.688644  246403 host.go:66] Checking if "no-preload-649841" exists ...
	I1020 12:40:57.688650  246403 config.go:182] Loaded profile config "no-preload-649841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:40:57.688687  246403 addons.go:69] Setting dashboard=true in profile "no-preload-649841"
	I1020 12:40:57.688707  246403 addons.go:238] Setting addon dashboard=true in "no-preload-649841"
	W1020 12:40:57.688718  246403 addons.go:247] addon dashboard should already be in state true
	I1020 12:40:57.688740  246403 host.go:66] Checking if "no-preload-649841" exists ...
	I1020 12:40:57.688925  246403 addons.go:69] Setting default-storageclass=true in profile "no-preload-649841"
	I1020 12:40:57.688953  246403 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-649841"
	I1020 12:40:57.689157  246403 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:57.689245  246403 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:57.689253  246403 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:57.692498  246403 out.go:179] * Verifying Kubernetes components...
	I1020 12:40:57.693933  246403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:40:57.717912  246403 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:40:57.717921  246403 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 12:40:57.718123  246403 addons.go:238] Setting addon default-storageclass=true in "no-preload-649841"
	W1020 12:40:57.718144  246403 addons.go:247] addon default-storageclass should already be in state true
	I1020 12:40:57.718173  246403 host.go:66] Checking if "no-preload-649841" exists ...
	I1020 12:40:57.718753  246403 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:40:57.719290  246403 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:40:57.719305  246403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:40:57.719352  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:57.720357  246403 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1020 12:40:57.721382  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 12:40:57.721402  246403 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 12:40:57.721455  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:57.755026  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:57.756763  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:57.758118  246403 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:40:57.758139  246403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:40:57.758190  246403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:40:57.787000  246403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:40:57.846839  246403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:40:57.860459  246403 node_ready.go:35] waiting up to 6m0s for node "no-preload-649841" to be "Ready" ...
	I1020 12:40:57.874441  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 12:40:57.874483  246403 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 12:40:57.874812  246403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:40:57.888994  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 12:40:57.889033  246403 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 12:40:57.898531  246403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:40:57.907146  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 12:40:57.907178  246403 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 12:40:57.924685  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 12:40:57.924707  246403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 12:40:57.942656  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 12:40:57.942688  246403 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 12:40:57.958095  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 12:40:57.958124  246403 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 12:40:57.972266  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 12:40:57.972291  246403 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 12:40:57.985848  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 12:40:57.985875  246403 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 12:40:57.999144  246403 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 12:40:57.999171  246403 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 12:40:58.012294  246403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 12:40:59.461886  246403 node_ready.go:49] node "no-preload-649841" is "Ready"
	I1020 12:40:59.461927  246403 node_ready.go:38] duration metric: took 1.601435642s for node "no-preload-649841" to be "Ready" ...
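node_ready.go polls the node object until its Ready condition reports True; here that takes ~1.6s after the kubelet restart. A minimal client-go sketch of the same readiness predicate, assuming the kubeconfig path and node name from this run (the polling interval is illustrative):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21773-11075/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as start.go:235
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-649841", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println(`node "no-preload-649841" is "Ready"`)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for node to be Ready")
}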
	I1020 12:40:59.461947  246403 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:40:59.462006  246403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:40:59.965544  246403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.090697152s)
	I1020 12:40:59.965581  246403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.067020733s)
	I1020 12:40:59.965676  246403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.953345713s)
	I1020 12:40:59.965707  246403 api_server.go:72] duration metric: took 2.277255271s to wait for apiserver process to appear ...
	I1020 12:40:59.965725  246403 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:40:59.965745  246403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:40:59.968115  246403 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-649841 addons enable metrics-server
	
	I1020 12:40:59.970245  246403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:40:59.970269  246403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:40:59.972316  246403 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1020 12:40:57.788184  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	W1020 12:41:00.284683  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	W1020 12:41:02.285085  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	I1020 12:40:59.973645  246403 addons.go:514] duration metric: took 2.28516047s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1020 12:41:00.466494  246403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:41:00.470936  246403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:41:00.470961  246403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:41:00.966299  246403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:41:00.970351  246403 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 12:41:00.971338  246403 api_server.go:141] control plane version: v1.34.1
	I1020 12:41:00.971363  246403 api_server.go:131] duration metric: took 1.005631068s to wait for apiserver health ...
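api_server.go treats the 500s above as transient: only the two [-] post-start hooks (RBAC bootstrap roles and system priority classes) are still settling, and the endpoint flips to 200 about a second later. A minimal sketch of that healthz poll; the apiserver's certificate is issued by the cluster CA rather than a public one, so this sketch skips verification where minikube instead builds its client from the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for brevity: trust nothing and skip verification;
		// the real check trusts the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			time.Sleep(500 * time.Millisecond) // refused/reset while the apiserver restarts
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("healthz: %s\n", body) // "ok"
			return
		}
		// 500 with the per-check [+]/[-] breakdown seen above; retry.
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}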
	I1020 12:41:00.971372  246403 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:41:00.975054  246403 system_pods.go:59] 8 kube-system pods found
	I1020 12:41:00.975100  246403 system_pods.go:61] "coredns-66bc5c9577-7d88p" [6c859d9e-5016-485a-adc3-b33089248f2f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:41:00.975140  246403 system_pods.go:61] "etcd-no-preload-649841" [01effaac-dc30-4ede-9ffa-db5dd8516ba9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:41:00.975155  246403 system_pods.go:61] "kindnet-ghtcz" [c057504d-908d-4f7f-995b-0524392b82ff] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1020 12:41:00.975168  246403 system_pods.go:61] "kube-apiserver-no-preload-649841" [604873f7-a274-4c82-97ca-56b8366d80da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:41:00.975179  246403 system_pods.go:61] "kube-controller-manager-no-preload-649841" [45c19792-ae07-4c79-9844-27aa5b1f69e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:41:00.975190  246403 system_pods.go:61] "kube-proxy-6vpwz" [6ef821cc-1bf1-4ded-8a94-d320d898c160] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1020 12:41:00.975200  246403 system_pods.go:61] "kube-scheduler-no-preload-649841" [bae232f4-b119-46f1-b7d6-e207bb6229a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:41:00.975210  246403 system_pods.go:61] "storage-provisioner" [7ee83276-3c65-4f28-88df-db5aca9ab40b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:41:00.975220  246403 system_pods.go:74] duration metric: took 3.840898ms to wait for pod list to return data ...
	I1020 12:41:00.975233  246403 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:41:00.977661  246403 default_sa.go:45] found service account: "default"
	I1020 12:41:00.977680  246403 default_sa.go:55] duration metric: took 2.438516ms for default service account to be created ...
	I1020 12:41:00.977688  246403 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:41:00.980488  246403 system_pods.go:86] 8 kube-system pods found
	I1020 12:41:00.980511  246403 system_pods.go:89] "coredns-66bc5c9577-7d88p" [6c859d9e-5016-485a-adc3-b33089248f2f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:41:00.980519  246403 system_pods.go:89] "etcd-no-preload-649841" [01effaac-dc30-4ede-9ffa-db5dd8516ba9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:41:00.980526  246403 system_pods.go:89] "kindnet-ghtcz" [c057504d-908d-4f7f-995b-0524392b82ff] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1020 12:41:00.980532  246403 system_pods.go:89] "kube-apiserver-no-preload-649841" [604873f7-a274-4c82-97ca-56b8366d80da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:41:00.980538  246403 system_pods.go:89] "kube-controller-manager-no-preload-649841" [45c19792-ae07-4c79-9844-27aa5b1f69e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:41:00.980547  246403 system_pods.go:89] "kube-proxy-6vpwz" [6ef821cc-1bf1-4ded-8a94-d320d898c160] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1020 12:41:00.980553  246403 system_pods.go:89] "kube-scheduler-no-preload-649841" [bae232f4-b119-46f1-b7d6-e207bb6229a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:41:00.980560  246403 system_pods.go:89] "storage-provisioner" [7ee83276-3c65-4f28-88df-db5aca9ab40b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:41:00.980567  246403 system_pods.go:126] duration metric: took 2.874125ms to wait for k8s-apps to be running ...
	I1020 12:41:00.980577  246403 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:41:00.980618  246403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:41:00.994130  246403 system_svc.go:56] duration metric: took 13.542883ms WaitForService to wait for kubelet
	I1020 12:41:00.994157  246403 kubeadm.go:586] duration metric: took 3.305707481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:41:00.994173  246403 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:41:00.997168  246403 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:41:00.997193  246403 node_conditions.go:123] node cpu capacity is 8
	I1020 12:41:00.997211  246403 node_conditions.go:105] duration metric: took 3.027114ms to run NodePressure ...
	I1020 12:41:00.997224  246403 start.go:241] waiting for startup goroutines ...
	I1020 12:41:00.997230  246403 start.go:246] waiting for cluster config update ...
	I1020 12:41:00.997240  246403 start.go:255] writing updated cluster config ...
	I1020 12:41:00.997508  246403 ssh_runner.go:195] Run: rm -f paused
	I1020 12:41:01.001542  246403 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:41:01.005111  246403 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7d88p" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 12:41:03.011436  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	I1020 12:41:05.166152  236655 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.073533737s)
	W1020 12:41:05.166201  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1020 12:41:05.166211  236655 logs.go:123] Gathering logs for kube-apiserver [2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97] ...
	I1020 12:41:05.166234  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	W1020 12:41:04.286463  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	W1020 12:41:06.786408  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	W1020 12:41:05.511223  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	W1020 12:41:07.511472  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	I1020 12:41:07.708960  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	W1020 12:41:09.285547  243047 pod_ready.go:104] pod "coredns-5dd5756b68-c9869" is not "Ready", error: <nil>
	I1020 12:41:10.786084  243047 pod_ready.go:94] pod "coredns-5dd5756b68-c9869" is "Ready"
	I1020 12:41:10.786116  243047 pod_ready.go:86] duration metric: took 32.506942493s for pod "coredns-5dd5756b68-c9869" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:10.788983  243047 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:10.792806  243047 pod_ready.go:94] pod "etcd-old-k8s-version-384253" is "Ready"
	I1020 12:41:10.792829  243047 pod_ready.go:86] duration metric: took 3.823204ms for pod "etcd-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:10.795404  243047 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:10.799298  243047 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-384253" is "Ready"
	I1020 12:41:10.799324  243047 pod_ready.go:86] duration metric: took 3.893647ms for pod "kube-apiserver-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:10.801763  243047 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:10.982339  243047 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-384253" is "Ready"
	I1020 12:41:10.982363  243047 pod_ready.go:86] duration metric: took 180.570941ms for pod "kube-controller-manager-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:11.183421  243047 pod_ready.go:83] waiting for pod "kube-proxy-qfvtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:11.582613  243047 pod_ready.go:94] pod "kube-proxy-qfvtm" is "Ready"
	I1020 12:41:11.582637  243047 pod_ready.go:86] duration metric: took 399.193005ms for pod "kube-proxy-qfvtm" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:11.783523  243047 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:12.182444  243047 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-384253" is "Ready"
	I1020 12:41:12.182475  243047 pod_ready.go:86] duration metric: took 398.922892ms for pod "kube-scheduler-old-k8s-version-384253" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:12.182491  243047 pod_ready.go:40] duration metric: took 33.907059268s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:41:12.227008  243047 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1020 12:41:12.229723  243047 out.go:203] 
	W1020 12:41:12.231360  243047 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1020 12:41:12.232683  243047 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1020 12:41:12.234096  243047 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-384253" cluster and "default" namespace by default
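pod_ready.go, running in both the old-k8s-version (243047) and no-preload (246403) processes here, iterates a fixed list of kube-system label selectors and waits for each matching pod's Ready condition; coredns took ~32.5s above because its container restarts after the control plane comes back. A minimal client-go sketch of that per-selector wait, assuming the selectors and kubeconfig path from this run (minikube also accepts a pod disappearing, the "or be gone" case, which is omitted):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21773-11075/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Label selectors from the pod_ready.go log line above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute) // the 4m0s "extra waiting" budget
	defer cancel()
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				log.Fatal(err)
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if !podReady(&p) {
					ready = false
				}
			}
			if ready {
				fmt.Printf("%s: Ready\n", sel)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}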
	W1020 12:41:10.010491  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	W1020 12:41:12.010632  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	W1020 12:41:14.510590  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	I1020 12:41:12.709875  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1020 12:41:12.709943  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:12.710009  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:12.737186  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:12.737206  236655 cri.go:89] found id: "2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	I1020 12:41:12.737211  236655 cri.go:89] found id: ""
	I1020 12:41:12.737220  236655 logs.go:282] 2 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97]
	I1020 12:41:12.737278  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:12.741246  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:12.745179  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:12.745245  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:12.771137  236655 cri.go:89] found id: ""
	I1020 12:41:12.771159  236655 logs.go:282] 0 containers: []
	W1020 12:41:12.771167  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:12.771173  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:12.771224  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:12.799118  236655 cri.go:89] found id: ""
	I1020 12:41:12.799153  236655 logs.go:282] 0 containers: []
	W1020 12:41:12.799161  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:12.799167  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:12.799215  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:12.826247  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:12.826278  236655 cri.go:89] found id: ""
	I1020 12:41:12.826289  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:12.826341  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:12.830624  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:12.830686  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:12.858505  236655 cri.go:89] found id: ""
	I1020 12:41:12.858529  236655 logs.go:282] 0 containers: []
	W1020 12:41:12.858536  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:12.858542  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:12.858595  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:12.885726  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:12.885745  236655 cri.go:89] found id: "c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:41:12.885748  236655 cri.go:89] found id: ""
	I1020 12:41:12.885755  236655 logs.go:282] 2 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc]
	I1020 12:41:12.885818  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:12.889911  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:12.893711  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:12.893798  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:12.921035  236655 cri.go:89] found id: ""
	I1020 12:41:12.921069  236655 logs.go:282] 0 containers: []
	W1020 12:41:12.921079  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:12.921087  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:12.921143  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:12.948187  236655 cri.go:89] found id: ""
	I1020 12:41:12.948209  236655 logs.go:282] 0 containers: []
	W1020 12:41:12.948216  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:12.948234  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:12.948244  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:12.962707  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:12.962733  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:12.995611  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:12.995639  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:13.024078  236655 logs.go:123] Gathering logs for kube-controller-manager [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc] ...
	I1020 12:41:13.024123  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:41:13.050540  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:13.050566  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:13.120949  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:13.120984  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:17.010351  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	W1020 12:41:19.510306  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	I1020 12:41:16.608975  236655 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.487969789s)
	W1020 12:41:16.609008  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:58148->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:58148->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1020 12:41:16.609021  236655 logs.go:123] Gathering logs for kube-apiserver [2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97] ...
	I1020 12:41:16.609038  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	W1020 12:41:16.634187  236655 logs.go:130] failed kube-apiserver [2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97": Process exited with status 1
	stdout:
	
	stderr:
	E1020 12:41:16.632053    1613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97\": container with ID starting with 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97 not found: ID does not exist" containerID="2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	time="2025-10-20T12:41:16Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97\": container with ID starting with 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1020 12:41:16.632053    1613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97\": container with ID starting with 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97 not found: ID does not exist" containerID="2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97"
	time="2025-10-20T12:41:16Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97\": container with ID starting with 2816a83b333c38de1c37c363a60d7089a5a0eab3d76c470d82437093cefb7c97 not found: ID does not exist"
	
	** /stderr **
	I1020 12:41:16.634211  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:16.634225  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:16.679625  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:16.679655  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:16.722809  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:16.722841  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:19.254460  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:19.254947  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:19.255013  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:19.255061  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:19.281805  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:19.281830  236655 cri.go:89] found id: ""
	I1020 12:41:19.281836  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:19.281887  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:19.285728  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:19.285818  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:19.313641  236655 cri.go:89] found id: ""
	I1020 12:41:19.313670  236655 logs.go:282] 0 containers: []
	W1020 12:41:19.313680  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:19.313687  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:19.313753  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:19.340841  236655 cri.go:89] found id: ""
	I1020 12:41:19.340868  236655 logs.go:282] 0 containers: []
	W1020 12:41:19.340878  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:19.340886  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:19.340950  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:19.370581  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:19.370605  236655 cri.go:89] found id: ""
	I1020 12:41:19.370615  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:19.370666  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:19.374613  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:19.374689  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:19.401702  236655 cri.go:89] found id: ""
	I1020 12:41:19.401727  236655 logs.go:282] 0 containers: []
	W1020 12:41:19.401735  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:19.401740  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:19.401817  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:19.430961  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:19.430984  236655 cri.go:89] found id: "c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:41:19.430989  236655 cri.go:89] found id: ""
	I1020 12:41:19.430999  236655 logs.go:282] 2 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc]
	I1020 12:41:19.431064  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:19.435218  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:19.438944  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:19.439003  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:19.465381  236655 cri.go:89] found id: ""
	I1020 12:41:19.465404  236655 logs.go:282] 0 containers: []
	W1020 12:41:19.465411  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:19.465416  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:19.465475  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:19.494003  236655 cri.go:89] found id: ""
	I1020 12:41:19.494031  236655 logs.go:282] 0 containers: []
	W1020 12:41:19.494042  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:19.494060  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:19.494074  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:19.509421  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:19.509447  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:19.570966  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:19.570988  236655 logs.go:123] Gathering logs for kube-controller-manager [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc] ...
	I1020 12:41:19.571003  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:41:19.599266  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:19.599299  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:19.640100  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:19.640129  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:19.671438  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:19.671464  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:19.743574  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:19.743615  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:19.777925  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:19.777963  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:19.824601  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:19.824635  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
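Process 236655 cannot reach its apiserver, so logs.go falls back to enumerating control-plane containers with crictl ps -a --quiet --name=<component> and tailing each with crictl logs --tail 400. The NotFound failure at 12:41:16 is a race: container 2816a83b... was removed between the listing and the log fetch. A minimal sketch of that fallback via os/exec, tolerating the race (component list and tail size copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all (including exited) CRI containers whose name matches component.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, component := range components {
		ids, err := containerIDs(component)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", component)
			continue
		}
		for _, id := range ids {
			out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				// The container can be garbage-collected between listing and
				// fetching, surfacing as the NotFound error seen above.
				fmt.Printf("%s [%s]: %v\n", component, id, err)
				continue
			}
			fmt.Printf("=== %s [%s] ===\n%s", component, id, out)
		}
	}
}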
	W1020 12:41:22.010260  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	W1020 12:41:24.010940  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	I1020 12:41:22.354183  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:22.354571  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:22.354619  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:22.354663  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:22.383737  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:22.383765  236655 cri.go:89] found id: ""
	I1020 12:41:22.383787  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:22.383840  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:22.387910  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:22.387964  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:22.417402  236655 cri.go:89] found id: ""
	I1020 12:41:22.417429  236655 logs.go:282] 0 containers: []
	W1020 12:41:22.417437  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:22.417443  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:22.417499  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:22.445403  236655 cri.go:89] found id: ""
	I1020 12:41:22.445428  236655 logs.go:282] 0 containers: []
	W1020 12:41:22.445436  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:22.445442  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:22.445521  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:22.473543  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:22.473564  236655 cri.go:89] found id: ""
	I1020 12:41:22.473573  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:22.473639  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:22.478193  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:22.478261  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:22.505761  236655 cri.go:89] found id: ""
	I1020 12:41:22.505804  236655 logs.go:282] 0 containers: []
	W1020 12:41:22.505814  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:22.505822  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:22.505900  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:22.535024  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:22.535047  236655 cri.go:89] found id: "c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:41:22.535053  236655 cri.go:89] found id: ""
	I1020 12:41:22.535061  236655 logs.go:282] 2 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc]
	I1020 12:41:22.535121  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:22.539432  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:22.543339  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:22.543407  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:22.570480  236655 cri.go:89] found id: ""
	I1020 12:41:22.570506  236655 logs.go:282] 0 containers: []
	W1020 12:41:22.570514  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:22.570520  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:22.570591  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:22.598327  236655 cri.go:89] found id: ""
	I1020 12:41:22.598358  236655 logs.go:282] 0 containers: []
	W1020 12:41:22.598370  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:22.598385  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:22.598409  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:22.640397  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:22.640437  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:22.714259  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:22.714312  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:22.761736  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:22.761785  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:22.790948  236655 logs.go:123] Gathering logs for kube-controller-manager [c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc] ...
	I1020 12:41:22.790979  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c06a1440cbb217ba05917fee2cca27fab4ff87adf83785f9358bffadedd693cc"
	I1020 12:41:22.818476  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:22.818504  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:22.850368  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:22.850401  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:22.864803  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:22.864830  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:22.921676  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:22.921695  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:22.921709  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:25.455850  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:25.456221  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:25.456267  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:25.456314  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:25.483786  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:25.483814  236655 cri.go:89] found id: ""
	I1020 12:41:25.483823  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:25.483905  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:25.488133  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:25.488208  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:25.516741  236655 cri.go:89] found id: ""
	I1020 12:41:25.516767  236655 logs.go:282] 0 containers: []
	W1020 12:41:25.516804  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:25.516809  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:25.516857  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:25.545053  236655 cri.go:89] found id: ""
	I1020 12:41:25.545075  236655 logs.go:282] 0 containers: []
	W1020 12:41:25.545082  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:25.545087  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:25.545141  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:25.576807  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:25.576831  236655 cri.go:89] found id: ""
	I1020 12:41:25.576840  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:25.576904  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:25.581173  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:25.581334  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:25.608906  236655 cri.go:89] found id: ""
	I1020 12:41:25.608930  236655 logs.go:282] 0 containers: []
	W1020 12:41:25.608940  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:25.608948  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:25.609006  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:25.637415  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:25.637438  236655 cri.go:89] found id: ""
	I1020 12:41:25.637448  236655 logs.go:282] 1 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:41:25.637510  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:25.641644  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:25.641711  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:25.670261  236655 cri.go:89] found id: ""
	I1020 12:41:25.670290  236655 logs.go:282] 0 containers: []
	W1020 12:41:25.670297  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:25.670302  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:25.670355  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:25.699544  236655 cri.go:89] found id: ""
	I1020 12:41:25.699570  236655 logs.go:282] 0 containers: []
	W1020 12:41:25.699582  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:25.699592  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:25.699608  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:25.756208  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:25.756229  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:25.756244  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:25.790015  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:25.790043  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:25.837728  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:25.837760  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:25.867330  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:25.867369  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:25.910479  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:25.910509  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:25.947262  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:25.947301  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:26.019369  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:26.019403  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	
	
	==> CRI-O <==
	Oct 20 12:40:54 old-k8s-version-384253 crio[566]: time="2025-10-20T12:40:54.831429794Z" level=info msg="Created container 332105a576843e1a18f1ce5bc18761e73f98bfb38484b691cc02ba884d3d6026: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cvpnn/kubernetes-dashboard" id=7c4b19d4-562f-47d1-8df0-eb8149507906 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:40:54 old-k8s-version-384253 crio[566]: time="2025-10-20T12:40:54.832378691Z" level=info msg="Starting container: 332105a576843e1a18f1ce5bc18761e73f98bfb38484b691cc02ba884d3d6026" id=0cac03c3-7c2e-43ea-a15f-1d072177e347 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:40:54 old-k8s-version-384253 crio[566]: time="2025-10-20T12:40:54.834805013Z" level=info msg="Started container" PID=1727 containerID=332105a576843e1a18f1ce5bc18761e73f98bfb38484b691cc02ba884d3d6026 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cvpnn/kubernetes-dashboard id=0cac03c3-7c2e-43ea-a15f-1d072177e347 name=/runtime.v1.RuntimeService/StartContainer sandboxID=882cf6f516791d40fae26df2ac842fe0ead8bb59fb3d0c9cd9c4b822ad2e90dd
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.038965865Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=517a85f7-fc79-432e-ad36-32695339b25e name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.039981263Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cdb258b8-de27-4341-b367-e2899de38c04 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.041026794Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e21644c7-9180-4736-aa33-13fdf375eb11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.041170294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.045934461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.046135319Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dd9b6ee19903c89d1ac0b2ad6801de1cac7a053915132c891e073eb1031ba41d/merged/etc/passwd: no such file or directory"
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.046163817Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dd9b6ee19903c89d1ac0b2ad6801de1cac7a053915132c891e073eb1031ba41d/merged/etc/group: no such file or directory"
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.046460557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.097064125Z" level=info msg="Created container bbb58682200169c5790db34c4f863aebe576c9d5fa7ac27ea2f0a8bbeaa434e8: kube-system/storage-provisioner/storage-provisioner" id=e21644c7-9180-4736-aa33-13fdf375eb11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.09774812Z" level=info msg="Starting container: bbb58682200169c5790db34c4f863aebe576c9d5fa7ac27ea2f0a8bbeaa434e8" id=8f858129-49b2-4126-b480-e6a857fedb11 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:41:08 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:08.099728059Z" level=info msg="Started container" PID=1751 containerID=bbb58682200169c5790db34c4f863aebe576c9d5fa7ac27ea2f0a8bbeaa434e8 description=kube-system/storage-provisioner/storage-provisioner id=8f858129-49b2-4126-b480-e6a857fedb11 name=/runtime.v1.RuntimeService/StartContainer sandboxID=76c0967551336b6cc7205cb2709d4a3034151fd9232478c8cb3d6e8b1da5c2a6
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.932177176Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f62146ac-0957-4b7e-b95f-9fbf57e50eb3 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.933160077Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b3637eed-387a-4dc3-9c49-ea038fc93b99 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.934210833Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l/dashboard-metrics-scraper" id=607803ad-67b4-4538-bd81-253f2dd9de37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.934337946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.939487804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.940024213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.967335993Z" level=info msg="Created container 11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l/dashboard-metrics-scraper" id=607803ad-67b4-4538-bd81-253f2dd9de37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.967964488Z" level=info msg="Starting container: 11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe" id=08eeeaa8-0d72-40f2-81a6-2aafdac1b6d2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:41:11 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:11.969755939Z" level=info msg="Started container" PID=1767 containerID=11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l/dashboard-metrics-scraper id=08eeeaa8-0d72-40f2-81a6-2aafdac1b6d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca0bb8bed647f3f6dde7e7eace58339868520e3adab03af999ad782f7a6a32c5
	Oct 20 12:41:12 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:12.052411475Z" level=info msg="Removing container: cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c" id=a6b3ec62-a7a2-453f-934f-7f6ae1327a4a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:41:12 old-k8s-version-384253 crio[566]: time="2025-10-20T12:41:12.062440435Z" level=info msg="Removed container cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l/dashboard-metrics-scraper" id=a6b3ec62-a7a2-453f-934f-7f6ae1327a4a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	11d85e029478f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   ca0bb8bed647f       dashboard-metrics-scraper-5f989dc9cf-f8g6l       kubernetes-dashboard
	bbb5868220016       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   76c0967551336       storage-provisioner                              kube-system
	332105a576843       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   34 seconds ago      Running             kubernetes-dashboard        0                   882cf6f516791       kubernetes-dashboard-8694d4445c-cvpnn            kubernetes-dashboard
	a9c9157678ee8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   594a7b87856be       coredns-5dd5756b68-c9869                         kube-system
	82b2ffc8539bb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   35de5113a5c9f       busybox                                          default
	619011c2bcd4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   76c0967551336       storage-provisioner                              kube-system
	e1aadd87abcbc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   bad9d380f1612       kindnet-tr8rl                                    kube-system
	81f8635595c35       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  0                   599f063043909       kube-proxy-qfvtm                                 kube-system
	5e481e30b8ec4       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   d8d8d4419482b       kube-scheduler-old-k8s-version-384253            kube-system
	bc8f02baa8770       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   93a32088ed8b2       kube-apiserver-old-k8s-version-384253            kube-system
	e1cc7b6a003ed       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   6823fe23f657b       kube-controller-manager-old-k8s-version-384253   kube-system
	f6c082ba3c5bb       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   8e2aaf6801aad       etcd-old-k8s-version-384253                      kube-system
	
	
	==> coredns [a9c9157678ee8818b6613789a87ebd56bf24f6bce34399e3307522241d499bf8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35175 - 16843 "HINFO IN 8920450995706395022.5979181544720036275. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018424577s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-384253
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-384253
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=old-k8s-version-384253
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_39_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:39:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-384253
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:41:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:41:07 +0000   Mon, 20 Oct 2025 12:39:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:41:07 +0000   Mon, 20 Oct 2025 12:39:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:41:07 +0000   Mon, 20 Oct 2025 12:39:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:41:07 +0000   Mon, 20 Oct 2025 12:39:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-384253
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b6451977-b7d8-4840-89f0-12d79aaa4949
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-c9869                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-old-k8s-version-384253                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-tr8rl                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-384253             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-384253    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-qfvtm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-384253             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-f8g6l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-cvpnn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node old-k8s-version-384253 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node old-k8s-version-384253 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node old-k8s-version-384253 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node old-k8s-version-384253 event: Registered Node old-k8s-version-384253 in Controller
	  Normal  NodeReady                92s                kubelet          Node old-k8s-version-384253 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 56s)  kubelet          Node old-k8s-version-384253 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 56s)  kubelet          Node old-k8s-version-384253 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 56s)  kubelet          Node old-k8s-version-384253 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node old-k8s-version-384253 event: Registered Node old-k8s-version-384253 in Controller
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [f6c082ba3c5bb39c9c14011daf9f0b91a04643d84063cc518b4449099b0fd75e] <==
	{"level":"info","ts":"2025-10-20T12:40:34.496185Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-10-20T12:40:34.496305Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T12:40:34.496539Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T12:40:34.49632Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T12:40:34.496677Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T12:40:34.496793Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T12:40:34.499117Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-20T12:40:34.499293Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-20T12:40:34.499349Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-20T12:40:34.499443Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-20T12:40:34.499499Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-20T12:40:35.786982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-20T12:40:35.787034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-20T12:40:35.787071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-20T12:40:35.787086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-20T12:40:35.787092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-20T12:40:35.7871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-20T12:40:35.787108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-20T12:40:35.78872Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-384253 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-20T12:40:35.78872Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T12:40:35.788748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T12:40:35.789057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-20T12:40:35.789094Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-20T12:40:35.789928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-20T12:40:35.789946Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:41:29 up  1:23,  0 user,  load average: 2.75, 3.33, 2.07
	Linux old-k8s-version-384253 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e1aadd87abcbc99c03699210c5ae4f8e8e1782905fba250d326b688cbbd48f15] <==
	I1020 12:40:37.582274       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:40:37.582514       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1020 12:40:37.582648       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:40:37.582662       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:40:37.582689       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:40:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:40:37.784474       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:40:37.785460       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:40:37.785508       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:40:37.785687       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:40:38.085700       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:40:38.085727       1 metrics.go:72] Registering metrics
	I1020 12:40:38.085813       1 controller.go:711] "Syncing nftables rules"
	I1020 12:40:47.784974       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:40:47.785060       1 main.go:301] handling current node
	I1020 12:40:57.785952       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:40:57.786006       1 main.go:301] handling current node
	I1020 12:41:07.784366       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:41:07.784401       1 main.go:301] handling current node
	I1020 12:41:17.788626       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:41:17.788654       1 main.go:301] handling current node
	I1020 12:41:27.791133       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:41:27.791170       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bc8f02baa8770ba6721a99030f25088261d2c0cd3db222046296ba97c0e0d54e] <==
	I1020 12:40:36.741908       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1020 12:40:36.787910       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1020 12:40:36.787935       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1020 12:40:36.787966       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1020 12:40:36.788153       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1020 12:40:36.788740       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 12:40:36.788899       1 shared_informer.go:318] Caches are synced for configmaps
	I1020 12:40:36.795670       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1020 12:40:36.795695       1 aggregator.go:166] initial CRD sync complete...
	I1020 12:40:36.795701       1 autoregister_controller.go:141] Starting autoregister controller
	I1020 12:40:36.795705       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 12:40:36.795710       1 cache.go:39] Caches are synced for autoregister controller
	I1020 12:40:36.819106       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:40:37.616293       1 controller.go:624] quota admission added evaluator for: namespaces
	I1020 12:40:37.648294       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1020 12:40:37.666476       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:40:37.675136       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:40:37.682324       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1020 12:40:37.692580       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:40:37.720362       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.233.182"}
	I1020 12:40:37.733703       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.249.34"}
	I1020 12:40:48.896019       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:40:48.898640       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1020 12:40:49.161414       1 controller.go:624] quota admission added evaluator for: endpoints
	I1020 12:40:49.161414       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e1cc7b6a003edbbb90ecfe2f4ca699c5caa7bc9e2e4aab94b226caa3576d4308] <==
	I1020 12:40:49.168353       1 shared_informer.go:318] Caches are synced for resource quota
	I1020 12:40:49.216802       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="261.240338ms"
	I1020 12:40:49.216924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.216µs"
	I1020 12:40:49.217264       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-f8g6l"
	I1020 12:40:49.217283       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-cvpnn"
	I1020 12:40:49.224151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="268.60968ms"
	I1020 12:40:49.224581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="268.988876ms"
	I1020 12:40:49.237922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.290435ms"
	I1020 12:40:49.238030       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.183µs"
	I1020 12:40:49.239570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="15.367577ms"
	I1020 12:40:49.250003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="58.416µs"
	I1020 12:40:49.253154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.487523ms"
	I1020 12:40:49.253274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.23µs"
	I1020 12:40:49.485598       1 shared_informer.go:318] Caches are synced for garbage collector
	I1020 12:40:49.540304       1 shared_informer.go:318] Caches are synced for garbage collector
	I1020 12:40:49.540343       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1020 12:40:52.002427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="106.169µs"
	I1020 12:40:53.008999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.069µs"
	I1020 12:40:54.014976       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="111.218µs"
	I1020 12:40:55.118810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.51844ms"
	I1020 12:40:55.118916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.059µs"
	I1020 12:41:10.725195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.078694ms"
	I1020 12:41:10.725312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.753µs"
	I1020 12:41:12.062142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="112.123µs"
	I1020 12:41:19.537335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="103.009µs"
	
	
	==> kube-proxy [81f8635595c355db5ae5a00afb41d8dd5cb7bff59c4bdad7af60c092966dab72] <==
	I1020 12:40:37.353707       1 server_others.go:69] "Using iptables proxy"
	I1020 12:40:37.364160       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1020 12:40:37.385107       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:40:37.387611       1 server_others.go:152] "Using iptables Proxier"
	I1020 12:40:37.387665       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1020 12:40:37.387674       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1020 12:40:37.387703       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1020 12:40:37.388113       1 server.go:846] "Version info" version="v1.28.0"
	I1020 12:40:37.388137       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:40:37.388842       1 config.go:188] "Starting service config controller"
	I1020 12:40:37.388880       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1020 12:40:37.388892       1 config.go:97] "Starting endpoint slice config controller"
	I1020 12:40:37.388904       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1020 12:40:37.389450       1 config.go:315] "Starting node config controller"
	I1020 12:40:37.389468       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1020 12:40:37.489918       1 shared_informer.go:318] Caches are synced for node config
	I1020 12:40:37.489945       1 shared_informer.go:318] Caches are synced for service config
	I1020 12:40:37.489965       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5e481e30b8ec40735fa2f558bf9dd408ddb9a893973ee6253a8f9996d7dde47c] <==
	I1020 12:40:35.164355       1 serving.go:348] Generated self-signed cert in-memory
	I1020 12:40:36.761356       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1020 12:40:36.761381       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:40:36.765473       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:40:36.765486       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1020 12:40:36.765501       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1020 12:40:36.765508       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1020 12:40:36.765518       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:40:36.765540       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1020 12:40:36.766550       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1020 12:40:36.766579       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1020 12:40:36.866435       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1020 12:40:36.866474       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1020 12:40:36.866434       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 20 12:40:49 old-k8s-version-384253 kubelet[721]: I1020 12:40:49.381219     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p96d6\" (UniqueName: \"kubernetes.io/projected/9a983538-7cec-4083-9feb-24536fead6c9-kube-api-access-p96d6\") pod \"dashboard-metrics-scraper-5f989dc9cf-f8g6l\" (UID: \"9a983538-7cec-4083-9feb-24536fead6c9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l"
	Oct 20 12:40:49 old-k8s-version-384253 kubelet[721]: I1020 12:40:49.381267     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr2rv\" (UniqueName: \"kubernetes.io/projected/3b04a5b6-792d-4f4a-9bc5-1880c814dee0-kube-api-access-qr2rv\") pod \"kubernetes-dashboard-8694d4445c-cvpnn\" (UID: \"3b04a5b6-792d-4f4a-9bc5-1880c814dee0\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cvpnn"
	Oct 20 12:40:49 old-k8s-version-384253 kubelet[721]: I1020 12:40:49.381296     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3b04a5b6-792d-4f4a-9bc5-1880c814dee0-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-cvpnn\" (UID: \"3b04a5b6-792d-4f4a-9bc5-1880c814dee0\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cvpnn"
	Oct 20 12:40:49 old-k8s-version-384253 kubelet[721]: I1020 12:40:49.381320     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9a983538-7cec-4083-9feb-24536fead6c9-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-f8g6l\" (UID: \"9a983538-7cec-4083-9feb-24536fead6c9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l"
	Oct 20 12:40:51 old-k8s-version-384253 kubelet[721]: I1020 12:40:51.990169     721 scope.go:117] "RemoveContainer" containerID="df7922cb8985e9d327fc88ee1d73c558495e7340db782ebda99550fb326fd4b9"
	Oct 20 12:40:52 old-k8s-version-384253 kubelet[721]: I1020 12:40:52.996736     721 scope.go:117] "RemoveContainer" containerID="df7922cb8985e9d327fc88ee1d73c558495e7340db782ebda99550fb326fd4b9"
	Oct 20 12:40:52 old-k8s-version-384253 kubelet[721]: I1020 12:40:52.996967     721 scope.go:117] "RemoveContainer" containerID="cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c"
	Oct 20 12:40:52 old-k8s-version-384253 kubelet[721]: E1020 12:40:52.997339     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f8g6l_kubernetes-dashboard(9a983538-7cec-4083-9feb-24536fead6c9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l" podUID="9a983538-7cec-4083-9feb-24536fead6c9"
	Oct 20 12:40:54 old-k8s-version-384253 kubelet[721]: I1020 12:40:54.002029     721 scope.go:117] "RemoveContainer" containerID="cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c"
	Oct 20 12:40:54 old-k8s-version-384253 kubelet[721]: E1020 12:40:54.002361     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f8g6l_kubernetes-dashboard(9a983538-7cec-4083-9feb-24536fead6c9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l" podUID="9a983538-7cec-4083-9feb-24536fead6c9"
	Oct 20 12:40:55 old-k8s-version-384253 kubelet[721]: I1020 12:40:55.057430     721 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cvpnn" podStartSLOduration=0.818706236 podCreationTimestamp="2025-10-20 12:40:49 +0000 UTC" firstStartedPulling="2025-10-20 12:40:49.549963648 +0000 UTC m=+15.709866317" lastFinishedPulling="2025-10-20 12:40:54.788621375 +0000 UTC m=+20.948524056" observedRunningTime="2025-10-20 12:40:55.056835136 +0000 UTC m=+21.216737822" watchObservedRunningTime="2025-10-20 12:40:55.057363975 +0000 UTC m=+21.217266663"
	Oct 20 12:40:59 old-k8s-version-384253 kubelet[721]: I1020 12:40:59.525458     721 scope.go:117] "RemoveContainer" containerID="cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c"
	Oct 20 12:40:59 old-k8s-version-384253 kubelet[721]: E1020 12:40:59.525916     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f8g6l_kubernetes-dashboard(9a983538-7cec-4083-9feb-24536fead6c9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l" podUID="9a983538-7cec-4083-9feb-24536fead6c9"
	Oct 20 12:41:08 old-k8s-version-384253 kubelet[721]: I1020 12:41:08.038454     721 scope.go:117] "RemoveContainer" containerID="619011c2bcd4b73d4a9961d9b414cc1b7bf872d29079384e5981cc7d29203fa9"
	Oct 20 12:41:11 old-k8s-version-384253 kubelet[721]: I1020 12:41:11.931473     721 scope.go:117] "RemoveContainer" containerID="cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c"
	Oct 20 12:41:12 old-k8s-version-384253 kubelet[721]: I1020 12:41:12.050848     721 scope.go:117] "RemoveContainer" containerID="cde504b4e75163a0523a610dbba1916872f688ef0b5585ca6bb81a7397b9d78c"
	Oct 20 12:41:12 old-k8s-version-384253 kubelet[721]: I1020 12:41:12.051220     721 scope.go:117] "RemoveContainer" containerID="11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe"
	Oct 20 12:41:12 old-k8s-version-384253 kubelet[721]: E1020 12:41:12.051591     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f8g6l_kubernetes-dashboard(9a983538-7cec-4083-9feb-24536fead6c9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l" podUID="9a983538-7cec-4083-9feb-24536fead6c9"
	Oct 20 12:41:19 old-k8s-version-384253 kubelet[721]: I1020 12:41:19.525402     721 scope.go:117] "RemoveContainer" containerID="11d85e029478fe2b5a17c7243e1d8e958b4df8ceafbc0488837daf495da3ebfe"
	Oct 20 12:41:19 old-k8s-version-384253 kubelet[721]: E1020 12:41:19.525792     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f8g6l_kubernetes-dashboard(9a983538-7cec-4083-9feb-24536fead6c9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f8g6l" podUID="9a983538-7cec-4083-9feb-24536fead6c9"
	Oct 20 12:41:24 old-k8s-version-384253 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 12:41:24 old-k8s-version-384253 kubelet[721]: I1020 12:41:24.286022     721 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 20 12:41:24 old-k8s-version-384253 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 12:41:24 old-k8s-version-384253 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 20 12:41:24 old-k8s-version-384253 systemd[1]: kubelet.service: Consumed 1.437s CPU time.
	
	
	==> kubernetes-dashboard [332105a576843e1a18f1ce5bc18761e73f98bfb38484b691cc02ba884d3d6026] <==
	2025/10/20 12:40:54 Starting overwatch
	2025/10/20 12:40:54 Using namespace: kubernetes-dashboard
	2025/10/20 12:40:54 Using in-cluster config to connect to apiserver
	2025/10/20 12:40:54 Using secret token for csrf signing
	2025/10/20 12:40:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 12:40:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 12:40:54 Successful initial request to the apiserver, version: v1.28.0
	2025/10/20 12:40:54 Generating JWE encryption key
	2025/10/20 12:40:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 12:40:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 12:40:55 Initializing JWE encryption key from synchronized object
	2025/10/20 12:40:55 Creating in-cluster Sidecar client
	2025/10/20 12:40:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 12:40:55 Serving insecurely on HTTP port: 9090
	2025/10/20 12:41:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [619011c2bcd4b73d4a9961d9b414cc1b7bf872d29079384e5981cc7d29203fa9] <==
	I1020 12:40:37.315527       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 12:41:07.319194       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bbb58682200169c5790db34c4f863aebe576c9d5fa7ac27ea2f0a8bbeaa434e8] <==
	I1020 12:41:08.113724       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 12:41:08.125118       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 12:41:08.125153       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1020 12:41:25.560677       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 12:41:25.560832       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a192978-e7b4-438b-8996-16ddc24fec6e", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-384253_91134c2a-6abc-47e9-ad6d-a09b907ee79c became leader
	I1020 12:41:25.560902       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-384253_91134c2a-6abc-47e9-ad6d-a09b907ee79c!
	I1020 12:41:25.661229       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-384253_91134c2a-6abc-47e9-ad6d-a09b907ee79c!
	

                                                
                                                
-- /stdout --
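
Note: the repeated "The connection to the server localhost:8443 was refused" errors in the log gathering above mean the apiserver was not accepting connections at that moment, consistent with the control plane having been stopped or paused mid-test. A minimal manual check against a profile in this state (a sketch; the binary path and profile name are taken from this run) would be:

  # inspect the apiserver container from inside the node (same crictl query the log gatherer uses)
  out/minikube-linux-amd64 -p old-k8s-version-384253 ssh -- sudo crictl ps -a --name kube-apiserver
  # resume the control plane and confirm the apiserver answers again
  out/minikube-linux-amd64 unpause -p old-k8s-version-384253 && kubectl --context old-k8s-version-384253 get nodes
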
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384253 -n old-k8s-version-384253
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384253 -n old-k8s-version-384253: exit status 2 (322.496269ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-384253 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.09s)
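Both Pause failures in this report follow the same sequence: stop the kubelet, enumerate CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces, then pause them via the runtime. The enumeration step is plain crictl filtered by pod-namespace label, as the cri.go lines in the next test show; a sketch of driving that same invocation from Go (sudo use and error handling assumed):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Quiet listing of container IDs per pod namespace, mirroring the
		// crictl command in the logs below.
		var ids []string
		for _, ns := range []string{"kube-system", "kubernetes-dashboard", "istio-operator"} {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
				"--label", "io.kubernetes.pod.namespace="+ns).Output()
			if err != nil {
				panic(err)
			}
			ids = append(ids, strings.Fields(string(out))...)
		}
		fmt.Println(ids)
	}
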

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (5.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-649841 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-649841 --alsologtostderr -v=1: exit status 80 (1.790248777s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-649841 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 12:41:47.173842  255942 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:41:47.174221  255942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:41:47.174238  255942 out.go:374] Setting ErrFile to fd 2...
	I1020 12:41:47.174244  255942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:41:47.174634  255942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:41:47.175239  255942 out.go:368] Setting JSON to false
	I1020 12:41:47.175278  255942 mustload.go:65] Loading cluster: no-preload-649841
	I1020 12:41:47.175681  255942 config.go:182] Loaded profile config "no-preload-649841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:41:47.176191  255942 cli_runner.go:164] Run: docker container inspect no-preload-649841 --format={{.State.Status}}
	I1020 12:41:47.196260  255942 host.go:66] Checking if "no-preload-649841" exists ...
	I1020 12:41:47.196661  255942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:41:47.257451  255942 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-20 12:41:47.247302378 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:41:47.258361  255942 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-649841 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 12:41:47.260426  255942 out.go:179] * Pausing node no-preload-649841 ... 
	I1020 12:41:47.261575  255942 host.go:66] Checking if "no-preload-649841" exists ...
	I1020 12:41:47.261865  255942 ssh_runner.go:195] Run: systemctl --version
	I1020 12:41:47.261903  255942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-649841
	I1020 12:41:47.280333  255942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/no-preload-649841/id_rsa Username:docker}
	I1020 12:41:47.381083  255942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:41:47.394721  255942 pause.go:52] kubelet running: true
	I1020 12:41:47.394821  255942 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:41:47.579527  255942 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:41:47.579590  255942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:41:47.647560  255942 cri.go:89] found id: "f430bf4944f7a61d04bf8ebdff50ad02df4215136bd523ead0bef1727db29034"
	I1020 12:41:47.647580  255942 cri.go:89] found id: "61fe223c6fe5bcb16adc3e355e55c3fbe804f30fc5ce435434798668a773ca35"
	I1020 12:41:47.647584  255942 cri.go:89] found id: "4b705515b3e6c7ede78b49b5e0fb2e2465d9214e74325acdefd45ec4d57b7057"
	I1020 12:41:47.647588  255942 cri.go:89] found id: "47543b902bb8bfb4682f05eb9ca15a0f86d22c693594891a3799e6b769feb9c8"
	I1020 12:41:47.647591  255942 cri.go:89] found id: "b299c1600a1eb44936aedd6cde2e8365c9906379c50dd89eb8ad705c657a863d"
	I1020 12:41:47.647594  255942 cri.go:89] found id: "816d9c037942c04231fca4c103de9e2bf20fdf60fa1761988b5c578a09691679"
	I1020 12:41:47.647597  255942 cri.go:89] found id: "49212f5520e23aa6f4699b58e138ce3c6899c074fd04839a3812363c6bf726d0"
	I1020 12:41:47.647601  255942 cri.go:89] found id: "bf13bdfc60d3a55c47badd4fa2e0a4042348a310ddce98adaa907a594a64d40d"
	I1020 12:41:47.647604  255942 cri.go:89] found id: "28717124ea3c362de3161e549a9412d0e0beda3ede0b813f19be2debafac8bd1"
	I1020 12:41:47.647621  255942 cri.go:89] found id: "fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614"
	I1020 12:41:47.647629  255942 cri.go:89] found id: "0550ddaaca138162356cb67e6b85432b155df954ed848975ffed2389b56fd043"
	I1020 12:41:47.647633  255942 cri.go:89] found id: ""
	I1020 12:41:47.647691  255942 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:41:47.659285  255942 retry.go:31] will retry after 268.510517ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:41:47Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:41:47.928863  255942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:41:47.941532  255942 pause.go:52] kubelet running: false
	I1020 12:41:47.941583  255942 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:41:48.083678  255942 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:41:48.083765  255942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:41:48.153054  255942 cri.go:89] found id: "f430bf4944f7a61d04bf8ebdff50ad02df4215136bd523ead0bef1727db29034"
	I1020 12:41:48.153081  255942 cri.go:89] found id: "61fe223c6fe5bcb16adc3e355e55c3fbe804f30fc5ce435434798668a773ca35"
	I1020 12:41:48.153087  255942 cri.go:89] found id: "4b705515b3e6c7ede78b49b5e0fb2e2465d9214e74325acdefd45ec4d57b7057"
	I1020 12:41:48.153092  255942 cri.go:89] found id: "47543b902bb8bfb4682f05eb9ca15a0f86d22c693594891a3799e6b769feb9c8"
	I1020 12:41:48.153095  255942 cri.go:89] found id: "b299c1600a1eb44936aedd6cde2e8365c9906379c50dd89eb8ad705c657a863d"
	I1020 12:41:48.153099  255942 cri.go:89] found id: "816d9c037942c04231fca4c103de9e2bf20fdf60fa1761988b5c578a09691679"
	I1020 12:41:48.153102  255942 cri.go:89] found id: "49212f5520e23aa6f4699b58e138ce3c6899c074fd04839a3812363c6bf726d0"
	I1020 12:41:48.153106  255942 cri.go:89] found id: "bf13bdfc60d3a55c47badd4fa2e0a4042348a310ddce98adaa907a594a64d40d"
	I1020 12:41:48.153110  255942 cri.go:89] found id: "28717124ea3c362de3161e549a9412d0e0beda3ede0b813f19be2debafac8bd1"
	I1020 12:41:48.153117  255942 cri.go:89] found id: "fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614"
	I1020 12:41:48.153121  255942 cri.go:89] found id: "0550ddaaca138162356cb67e6b85432b155df954ed848975ffed2389b56fd043"
	I1020 12:41:48.153125  255942 cri.go:89] found id: ""
	I1020 12:41:48.153174  255942 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:41:48.164988  255942 retry.go:31] will retry after 469.623061ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:41:48Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:41:48.635741  255942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:41:48.650699  255942 pause.go:52] kubelet running: false
	I1020 12:41:48.650763  255942 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:41:48.817397  255942 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:41:48.817490  255942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:41:48.894864  255942 cri.go:89] found id: "f430bf4944f7a61d04bf8ebdff50ad02df4215136bd523ead0bef1727db29034"
	I1020 12:41:48.894888  255942 cri.go:89] found id: "61fe223c6fe5bcb16adc3e355e55c3fbe804f30fc5ce435434798668a773ca35"
	I1020 12:41:48.894894  255942 cri.go:89] found id: "4b705515b3e6c7ede78b49b5e0fb2e2465d9214e74325acdefd45ec4d57b7057"
	I1020 12:41:48.894899  255942 cri.go:89] found id: "47543b902bb8bfb4682f05eb9ca15a0f86d22c693594891a3799e6b769feb9c8"
	I1020 12:41:48.894903  255942 cri.go:89] found id: "b299c1600a1eb44936aedd6cde2e8365c9906379c50dd89eb8ad705c657a863d"
	I1020 12:41:48.894917  255942 cri.go:89] found id: "816d9c037942c04231fca4c103de9e2bf20fdf60fa1761988b5c578a09691679"
	I1020 12:41:48.894921  255942 cri.go:89] found id: "49212f5520e23aa6f4699b58e138ce3c6899c074fd04839a3812363c6bf726d0"
	I1020 12:41:48.894926  255942 cri.go:89] found id: "bf13bdfc60d3a55c47badd4fa2e0a4042348a310ddce98adaa907a594a64d40d"
	I1020 12:41:48.894930  255942 cri.go:89] found id: "28717124ea3c362de3161e549a9412d0e0beda3ede0b813f19be2debafac8bd1"
	I1020 12:41:48.894937  255942 cri.go:89] found id: "fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614"
	I1020 12:41:48.894941  255942 cri.go:89] found id: "0550ddaaca138162356cb67e6b85432b155df954ed848975ffed2389b56fd043"
	I1020 12:41:48.894946  255942 cri.go:89] found id: ""
	I1020 12:41:48.894992  255942 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:41:48.909035  255942 out.go:203] 
	W1020 12:41:48.910402  255942 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:41:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:41:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:41:48.910425  255942 out.go:285] * 
	* 
	W1020 12:41:48.914752  255942 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:41:48.916139  255942 out.go:203] 

                                                
                                                
** /stderr **
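The proximate cause of this Pause failure is visible in the trace: after disabling the kubelet, minikube shells in and runs `sudo runc list -f json`, which exits 1 with `open /run/runc: no such file or directory`; it retries with growing, jittered waits (retry.go:31, 268ms then 469ms) and finally aborts with GUEST_PAUSE. Why the runc state directory is missing on this CRI-O node is not shown in the report. The retry shape itself is simple; a minimal sketch, not minikube's actual retry.go:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retry re-runs fn with growing, jittered waits until it succeeds or
	// the overall deadline passes, mirroring the "will retry after ..."
	// lines above.
	func retry(deadline time.Duration, fn func() error) error {
		wait := 250 * time.Millisecond
		end := time.Now().Add(deadline)
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(end) {
				return fmt.Errorf("giving up: %w", err)
			}
			wait += time.Duration(rand.Int63n(int64(wait))) // grow with jitter
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
	}

	func main() {
		// The probe that fails in this run: list running runc containers as JSON.
		err := retry(time.Second, func() error {
			return exec.Command("sudo", "runc", "list", "-f", "json").Run()
		})
		if err != nil {
			fmt.Println(err)
		}
	}
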
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-649841 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-649841
helpers_test.go:243: (dbg) docker inspect no-preload-649841:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a",
	        "Created": "2025-10-20T12:39:34.746845301Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246632,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:40:49.95458922Z",
	            "FinishedAt": "2025-10-20T12:40:49.040094866Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a/hosts",
	        "LogPath": "/var/lib/docker/containers/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a-json.log",
	        "Name": "/no-preload-649841",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-649841:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-649841",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a",
	                "LowerDir": "/var/lib/docker/overlay2/fd7335f5d0e8f3ae86c6eebf190fb273780f3e6fe176c1c06ca1b78cffb62873-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fd7335f5d0e8f3ae86c6eebf190fb273780f3e6fe176c1c06ca1b78cffb62873/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fd7335f5d0e8f3ae86c6eebf190fb273780f3e6fe176c1c06ca1b78cffb62873/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fd7335f5d0e8f3ae86c6eebf190fb273780f3e6fe176c1c06ca1b78cffb62873/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-649841",
	                "Source": "/var/lib/docker/volumes/no-preload-649841/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-649841",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-649841",
	                "name.minikube.sigs.k8s.io": "no-preload-649841",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0163ed58e91cc363b014c9e64b219fd6b9081774ea1d7cefde489f36afdd44e6",
	            "SandboxKey": "/var/run/docker/netns/0163ed58e91c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-649841": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:05:c9:d8:d8:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6720b99a1b6d91a202341926290513ef2c609bf0485dc9d73b76615c6b693c13",
	                    "EndpointID": "4ca837274b57372c9d685a025f52e1a02e0935ec30fb9143ee19619338fdc860",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-649841",
	                        "3ebdc406ea00"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
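The SSH endpoint minikube used earlier (cli_runner at 12:41:47.261903) comes straight out of this inspect document: a Go template indexes .NetworkSettings.Ports to pull the host port bound to 22/tcp. A standalone equivalent, assuming the container still exists:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same template minikube runs; "22/tcp" maps to HostPort "33068"
		// in the Ports block above.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"no-preload-649841").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("ssh port: %s", out)
	}
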
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-649841 -n no-preload-649841
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-649841 -n no-preload-649841: exit status 2 (339.843129ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
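The --format flag here renders a Go text/template over minikube's status struct, and the non-zero exit encodes components that are not running, which is why the harness treats exit status 2 as possibly fine even though the host prints "Running". A sketch of the same templating mechanism, using a simplified, hypothetical Status type rather than minikube's real one:

	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host, APIServer string // field names mirror the templates used in this report
	}

	func main() {
		s := Status{Host: "Running", APIServer: "Paused"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, s) // prints "Running", as in the stdout above
	}
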
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-649841 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-649841 logs -n 25: (1.276452575s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ force-systemd-flag-670413 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-670413    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p force-systemd-flag-670413                                                                                                                                                                                                                  │ force-systemd-flag-670413    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ cert-options-418869 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-418869          │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ -p cert-options-418869 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-418869          │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p cert-options-418869                                                                                                                                                                                                                        │ cert-options-418869          │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:40 UTC │
	│ stop    │ -p kubernetes-upgrade-196539                                                                                                                                                                                                                  │ kubernetes-upgrade-196539    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-384253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p old-k8s-version-384253 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-384253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:41 UTC │
	│ addons  │ enable metrics-server -p no-preload-649841 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p no-preload-649841 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p no-preload-649841 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:41 UTC │
	│ image   │ old-k8s-version-384253 image list --format=json                                                                                                                                                                                               │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p old-k8s-version-384253 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ start   │ -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ image   │ no-preload-649841 image list --format=json                                                                                                                                                                                                    │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p no-preload-649841 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:41:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:41:33.149481  252906 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:41:33.149783  252906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:41:33.149793  252906 out.go:374] Setting ErrFile to fd 2...
	I1020 12:41:33.149798  252906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:41:33.150032  252906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:41:33.150595  252906 out.go:368] Setting JSON to false
	I1020 12:41:33.151924  252906 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5042,"bootTime":1760959051,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:41:33.152044  252906 start.go:141] virtualization: kvm guest
	I1020 12:41:33.154542  252906 out.go:179] * [default-k8s-diff-port-874012] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:41:33.156078  252906 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:41:33.156074  252906 notify.go:220] Checking for updates...
	I1020 12:41:33.157638  252906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:41:33.159126  252906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:41:33.160329  252906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:41:33.161693  252906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:41:33.163016  252906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:41:33.164900  252906 config.go:182] Loaded profile config "cert-expiration-365628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:41:33.165003  252906 config.go:182] Loaded profile config "kubernetes-upgrade-196539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:41:33.165091  252906 config.go:182] Loaded profile config "no-preload-649841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:41:33.165180  252906 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:41:33.189720  252906 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:41:33.189819  252906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:41:33.252158  252906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-20 12:41:33.240526552 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:41:33.252276  252906 docker.go:318] overlay module found
	I1020 12:41:33.254285  252906 out.go:179] * Using the docker driver based on user configuration
	I1020 12:41:33.255872  252906 start.go:305] selected driver: docker
	I1020 12:41:33.255890  252906 start.go:925] validating driver "docker" against <nil>
	I1020 12:41:33.255905  252906 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:41:33.256448  252906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:41:33.314053  252906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-20 12:41:33.303046441 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:41:33.314236  252906 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 12:41:33.314456  252906 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:41:33.316237  252906 out.go:179] * Using Docker driver with root privileges
	I1020 12:41:33.317393  252906 cni.go:84] Creating CNI manager for ""
	I1020 12:41:33.317469  252906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:41:33.317481  252906 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 12:41:33.317556  252906 start.go:349] cluster config:
	{Name:default-k8s-diff-port-874012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:41:33.318953  252906 out.go:179] * Starting "default-k8s-diff-port-874012" primary control-plane node in "default-k8s-diff-port-874012" cluster
	I1020 12:41:33.320223  252906 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:41:33.321626  252906 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:41:33.322749  252906 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:41:33.322809  252906 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:41:33.322832  252906 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:41:33.322847  252906 cache.go:58] Caching tarball of preloaded images
	I1020 12:41:33.322971  252906 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:41:33.322981  252906 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:41:33.323077  252906 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/config.json ...
	I1020 12:41:33.323100  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/config.json: {Name:mkbaf95fe95383d81bbdcce007e08d73cbbc5331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:33.344046  252906 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:41:33.344069  252906 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:41:33.344102  252906 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:41:33.344130  252906 start.go:360] acquireMachinesLock for default-k8s-diff-port-874012: {Name:mk3fe7fe7ce0d8961f5f623b6e43bccc5068bc5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:41:33.344237  252906 start.go:364] duration metric: took 87.067µs to acquireMachinesLock for "default-k8s-diff-port-874012"
	I1020 12:41:33.344266  252906 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-874012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:41:33.344345  252906 start.go:125] createHost starting for "" (driver="docker")
	W1020 12:41:31.511940  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	I1020 12:41:34.010841  246403 pod_ready.go:94] pod "coredns-66bc5c9577-7d88p" is "Ready"
	I1020 12:41:34.010866  246403 pod_ready.go:86] duration metric: took 33.005722872s for pod "coredns-66bc5c9577-7d88p" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.013315  246403 pod_ready.go:83] waiting for pod "etcd-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.017174  246403 pod_ready.go:94] pod "etcd-no-preload-649841" is "Ready"
	I1020 12:41:34.017195  246403 pod_ready.go:86] duration metric: took 3.859597ms for pod "etcd-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.019131  246403 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.022716  246403 pod_ready.go:94] pod "kube-apiserver-no-preload-649841" is "Ready"
	I1020 12:41:34.022737  246403 pod_ready.go:86] duration metric: took 3.586444ms for pod "kube-apiserver-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.024529  246403 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.209736  246403 pod_ready.go:94] pod "kube-controller-manager-no-preload-649841" is "Ready"
	I1020 12:41:34.209762  246403 pod_ready.go:86] duration metric: took 185.214305ms for pod "kube-controller-manager-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.408994  246403 pod_ready.go:83] waiting for pod "kube-proxy-6vpwz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.809293  246403 pod_ready.go:94] pod "kube-proxy-6vpwz" is "Ready"
	I1020 12:41:34.809322  246403 pod_ready.go:86] duration metric: took 400.303842ms for pod "kube-proxy-6vpwz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:35.009721  246403 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:35.409564  246403 pod_ready.go:94] pod "kube-scheduler-no-preload-649841" is "Ready"
	I1020 12:41:35.409594  246403 pod_ready.go:86] duration metric: took 399.84125ms for pod "kube-scheduler-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:35.409608  246403 pod_ready.go:40] duration metric: took 34.40803163s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:41:35.457296  246403 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:41:35.460364  246403 out.go:179] * Done! kubectl is now configured to use "no-preload-649841" cluster and "default" namespace by default
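The block above is minikube polling each kube-system control-plane pod, one label selector at a time, until every one reports Ready. A roughly equivalent manual check with kubectl (selectors and context name taken from this log; the 60s timeout is an arbitrary choice, not minikube's):

    for selector in k8s-app=kube-dns component=etcd component=kube-apiserver \
                    component=kube-controller-manager k8s-app=kube-proxy \
                    component=kube-scheduler; do
      kubectl --context no-preload-649841 -n kube-system \
        wait --for=condition=Ready pod -l "$selector" --timeout=60s
    done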
	I1020 12:41:31.641472  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:31.641935  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:31.641986  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:31.642050  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:31.670466  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:31.670488  236655 cri.go:89] found id: ""
	I1020 12:41:31.670496  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:31.670544  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:31.674547  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:31.674609  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:31.702395  236655 cri.go:89] found id: ""
	I1020 12:41:31.702419  236655 logs.go:282] 0 containers: []
	W1020 12:41:31.702429  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:31.702435  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:31.702496  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:31.730192  236655 cri.go:89] found id: ""
	I1020 12:41:31.730219  236655 logs.go:282] 0 containers: []
	W1020 12:41:31.730228  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:31.730234  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:31.730289  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:31.760024  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:31.760046  236655 cri.go:89] found id: ""
	I1020 12:41:31.760056  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:31.760122  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:31.764226  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:31.764294  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:31.811664  236655 cri.go:89] found id: ""
	I1020 12:41:31.811691  236655 logs.go:282] 0 containers: []
	W1020 12:41:31.811700  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:31.811705  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:31.811780  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:31.846253  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:31.846281  236655 cri.go:89] found id: ""
	I1020 12:41:31.846292  236655 logs.go:282] 1 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:41:31.846379  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:31.850833  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:31.850934  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:31.879925  236655 cri.go:89] found id: ""
	I1020 12:41:31.879948  236655 logs.go:282] 0 containers: []
	W1020 12:41:31.879959  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:31.879965  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:31.880023  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:31.909123  236655 cri.go:89] found id: ""
	I1020 12:41:31.909154  236655 logs.go:282] 0 containers: []
	W1020 12:41:31.909166  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:31.909177  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:31.909191  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:31.924661  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:31.924688  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:31.986833  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:31.986857  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:31.986868  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:32.023276  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:32.023307  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:32.073966  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:32.074000  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:32.101452  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:32.101481  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:32.155708  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:32.155747  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:32.187309  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:32.187331  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
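Each retry in the loop above follows the same pattern: probe the apiserver's /healthz, and when the connection is refused, enumerate what the CRI is actually running and tail the relevant logs. Condensed into a manual diagnostic sequence (the IP, container ID, and crictl path are the ones from this log; the curl probe is a stand-in for minikube's internal HTTP check):

    curl -sk https://192.168.94.2:8443/healthz || echo 'apiserver not answering'
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400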
	I1020 12:41:34.764842  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:34.765235  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:34.765283  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:34.765348  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:34.795171  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:34.795195  236655 cri.go:89] found id: ""
	I1020 12:41:34.795204  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:34.795266  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:34.799282  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:34.799356  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:34.829250  236655 cri.go:89] found id: ""
	I1020 12:41:34.829279  236655 logs.go:282] 0 containers: []
	W1020 12:41:34.829310  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:34.829318  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:34.829369  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:34.857659  236655 cri.go:89] found id: ""
	I1020 12:41:34.857688  236655 logs.go:282] 0 containers: []
	W1020 12:41:34.857700  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:34.857707  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:34.857797  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:34.886515  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:34.886538  236655 cri.go:89] found id: ""
	I1020 12:41:34.886550  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:34.886617  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:34.891087  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:34.891169  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:34.920955  236655 cri.go:89] found id: ""
	I1020 12:41:34.920986  236655 logs.go:282] 0 containers: []
	W1020 12:41:34.920997  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:34.921005  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:34.921073  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:34.949538  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:34.949555  236655 cri.go:89] found id: ""
	I1020 12:41:34.949564  236655 logs.go:282] 1 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:41:34.949624  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:34.953690  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:34.953767  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:34.982184  236655 cri.go:89] found id: ""
	I1020 12:41:34.982215  236655 logs.go:282] 0 containers: []
	W1020 12:41:34.982226  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:34.982234  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:34.982296  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:35.012910  236655 cri.go:89] found id: ""
	I1020 12:41:35.012933  236655 logs.go:282] 0 containers: []
	W1020 12:41:35.012943  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:35.012954  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:35.012969  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:35.029874  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:35.029909  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:35.091944  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:35.091962  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:35.091973  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:35.133388  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:35.133440  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:35.183700  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:35.183737  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:35.214834  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:35.214867  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:35.270009  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:35.270045  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:35.306207  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:35.306244  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:33.346299  252906 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 12:41:33.346514  252906 start.go:159] libmachine.API.Create for "default-k8s-diff-port-874012" (driver="docker")
	I1020 12:41:33.346544  252906 client.go:168] LocalClient.Create starting
	I1020 12:41:33.346600  252906 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 12:41:33.346629  252906 main.go:141] libmachine: Decoding PEM data...
	I1020 12:41:33.346646  252906 main.go:141] libmachine: Parsing certificate...
	I1020 12:41:33.346711  252906 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 12:41:33.346733  252906 main.go:141] libmachine: Decoding PEM data...
	I1020 12:41:33.346741  252906 main.go:141] libmachine: Parsing certificate...
	I1020 12:41:33.347123  252906 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-874012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:41:33.365615  252906 cli_runner.go:211] docker network inspect default-k8s-diff-port-874012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:41:33.365683  252906 network_create.go:284] running [docker network inspect default-k8s-diff-port-874012] to gather additional debugging logs...
	I1020 12:41:33.365701  252906 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-874012
	W1020 12:41:33.384046  252906 cli_runner.go:211] docker network inspect default-k8s-diff-port-874012 returned with exit code 1
	I1020 12:41:33.384079  252906 network_create.go:287] error running [docker network inspect default-k8s-diff-port-874012]: docker network inspect default-k8s-diff-port-874012: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-874012 not found
	I1020 12:41:33.384111  252906 network_create.go:289] output of [docker network inspect default-k8s-diff-port-874012]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-874012 not found
	
	** /stderr **
	I1020 12:41:33.384200  252906 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:41:33.403248  252906 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
	I1020 12:41:33.404125  252906 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b260bb82684 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:b2:17:6a:60:46} reservation:<nil>}
	I1020 12:41:33.404833  252906 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-db577f5b7d12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:85:a2:b7:03:1d} reservation:<nil>}
	I1020 12:41:33.405154  252906 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1f871d5cfd48 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a2:c6:86:42:b6:13} reservation:<nil>}
	I1020 12:41:33.405727  252906 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6720b99a1b6d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:6e:e8:d3:69:12:f1} reservation:<nil>}
	I1020 12:41:33.406293  252906 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-4b75e071d2ef IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:a6:2a:37:02:57:60} reservation:<nil>}
	I1020 12:41:33.407467  252906 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e97f50}
	I1020 12:41:33.407495  252906 network_create.go:124] attempt to create docker network default-k8s-diff-port-874012 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1020 12:41:33.407560  252906 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-874012 default-k8s-diff-port-874012
	I1020 12:41:33.470088  252906 network_create.go:108] docker network default-k8s-diff-port-874012 192.168.103.0/24 created
	I1020 12:41:33.470121  252906 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-874012" container
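The subnet scan above walks the 192.168.x.0/24 candidates in steps of 9 (49, 58, 67, 76, 85, 94, 103), skipping any range already owned by an existing bridge, and settles on the first free one. The same inspection and creation done by hand, with the flags exactly as logged:

    docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
    docker network create --driver=bridge \
      --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-874012 \
      default-k8s-diff-port-874012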
	I1020 12:41:33.470193  252906 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:41:33.489214  252906 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-874012 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-874012 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:41:33.509191  252906 oci.go:103] Successfully created a docker volume default-k8s-diff-port-874012
	I1020 12:41:33.509278  252906 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-874012-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-874012 --entrypoint /usr/bin/test -v default-k8s-diff-port-874012:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 12:41:33.907116  252906 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-874012
	I1020 12:41:33.907150  252906 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:41:33.907170  252906 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:41:33.907227  252906 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-874012:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
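The extraction step above avoids copying the tarball into the container: a throwaway container whose entrypoint is tar mounts the host tarball read-only and the named volume as the destination, so the preloaded images land directly in the volume. Reconstructed with placeholders for the long paths in the log ($PRELOAD_TARBALL and $KIC_IMAGE are abbreviations for readability, not variables minikube sets):

    docker volume create default-k8s-diff-port-874012 \
      --label name.minikube.sigs.k8s.io=default-k8s-diff-port-874012 \
      --label created_by.minikube.sigs.k8s.io=true
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL":/preloaded.tar:ro \
      -v default-k8s-diff-port-874012:/extractDir \
      "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir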
	I1020 12:41:37.893670  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:37.894143  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:37.894195  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:37.894245  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:37.924142  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:37.924171  236655 cri.go:89] found id: ""
	I1020 12:41:37.924181  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:37.924240  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:37.928284  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:37.928346  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:37.957570  236655 cri.go:89] found id: ""
	I1020 12:41:37.957596  236655 logs.go:282] 0 containers: []
	W1020 12:41:37.957607  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:37.957614  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:37.957675  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:37.987138  236655 cri.go:89] found id: ""
	I1020 12:41:37.987160  236655 logs.go:282] 0 containers: []
	W1020 12:41:37.987169  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:37.987177  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:37.987244  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:38.015383  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:38.015408  236655 cri.go:89] found id: ""
	I1020 12:41:38.015418  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:38.015484  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:38.020299  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:38.020384  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:38.049441  236655 cri.go:89] found id: ""
	I1020 12:41:38.049465  236655 logs.go:282] 0 containers: []
	W1020 12:41:38.049472  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:38.049477  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:38.049527  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:38.078251  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:38.078276  236655 cri.go:89] found id: ""
	I1020 12:41:38.078287  236655 logs.go:282] 1 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:41:38.078349  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:38.082472  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:38.082532  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:38.111176  236655 cri.go:89] found id: ""
	I1020 12:41:38.111202  236655 logs.go:282] 0 containers: []
	W1020 12:41:38.111213  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:38.111226  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:38.111281  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:38.139967  236655 cri.go:89] found id: ""
	I1020 12:41:38.139996  236655 logs.go:282] 0 containers: []
	W1020 12:41:38.140004  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:38.140015  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:38.140028  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:38.172044  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:38.172079  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:38.244222  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:38.244260  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:38.259049  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:38.259078  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:38.318419  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:38.318439  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:38.318452  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:38.353857  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:38.353888  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:38.398589  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:38.398628  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:38.428996  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:38.429024  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:40.978845  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:40.979326  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:40.979375  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:40.979466  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:41.015302  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:41.015328  236655 cri.go:89] found id: ""
	I1020 12:41:41.015335  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:41.015383  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:41.019689  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:41.019789  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:41.047218  236655 cri.go:89] found id: ""
	I1020 12:41:41.047240  236655 logs.go:282] 0 containers: []
	W1020 12:41:41.047250  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:41.047256  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:41.047319  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:41.077150  236655 cri.go:89] found id: ""
	I1020 12:41:41.077174  236655 logs.go:282] 0 containers: []
	W1020 12:41:41.077181  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:41.077188  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:41.077239  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:41.106848  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:41.106867  236655 cri.go:89] found id: ""
	I1020 12:41:41.106874  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:41.106931  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:41.112104  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:41.112175  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:38.448910  252906 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-874012:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.541633419s)
	I1020 12:41:38.448941  252906 kic.go:203] duration metric: took 4.541766758s to extract preloaded images to volume ...
	W1020 12:41:38.449028  252906 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 12:41:38.449065  252906 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 12:41:38.449114  252906 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:41:38.508433  252906 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-874012 --name default-k8s-diff-port-874012 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-874012 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-874012 --network default-k8s-diff-port-874012 --ip 192.168.103.2 --volume default-k8s-diff-port-874012:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
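The single long docker run above is the whole "machine": a privileged container on the dedicated network with a static IP, the preloaded /var volume, and every service port published to an ephemeral localhost port (the empty host side of 127.0.0.1::8444 lets Docker pick one). The same command reflowed for readability, labels omitted ($KIC_IMAGE abbreviates the kicbase digest above):

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --hostname default-k8s-diff-port-874012 --name default-k8s-diff-port-874012 \
      --network default-k8s-diff-port-874012 --ip 192.168.103.2 \
      --volume default-k8s-diff-port-874012:/var \
      --memory=3072mb -e container=docker \
      --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 \
      --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      "$KIC_IMAGE"

Note the 8444 rather than the usual 8443: the non-default API server port is the point of the default-k8s-diff-port profile.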
	I1020 12:41:38.788829  252906 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Running}}
	I1020 12:41:38.806661  252906 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Status}}
	I1020 12:41:38.825433  252906 cli_runner.go:164] Run: docker exec default-k8s-diff-port-874012 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:41:38.867316  252906 oci.go:144] the created container "default-k8s-diff-port-874012" has a running status.
	I1020 12:41:38.867359  252906 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa...
	I1020 12:41:39.064103  252906 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:41:39.097977  252906 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Status}}
	I1020 12:41:39.121314  252906 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:41:39.121347  252906 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-874012 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:41:39.170120  252906 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Status}}
	I1020 12:41:39.189740  252906 machine.go:93] provisionDockerMachine start ...
	I1020 12:41:39.189873  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:39.213690  252906 main.go:141] libmachine: Using SSH client type: native
	I1020 12:41:39.214067  252906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1020 12:41:39.214090  252906 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:41:39.358718  252906 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-874012
	
	I1020 12:41:39.358746  252906 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-874012"
	I1020 12:41:39.358826  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:39.378506  252906 main.go:141] libmachine: Using SSH client type: native
	I1020 12:41:39.378840  252906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1020 12:41:39.378869  252906 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-874012 && echo "default-k8s-diff-port-874012" | sudo tee /etc/hostname
	I1020 12:41:39.533271  252906 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-874012
	
	I1020 12:41:39.533361  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:39.552494  252906 main.go:141] libmachine: Using SSH client type: native
	I1020 12:41:39.552736  252906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1020 12:41:39.552764  252906 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-874012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-874012/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-874012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:41:39.693762  252906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:41:39.693825  252906 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:41:39.693853  252906 ubuntu.go:190] setting up certificates
	I1020 12:41:39.693865  252906 provision.go:84] configureAuth start
	I1020 12:41:39.693928  252906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-874012
	I1020 12:41:39.711350  252906 provision.go:143] copyHostCerts
	I1020 12:41:39.711427  252906 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:41:39.711444  252906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:41:39.711519  252906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:41:39.711635  252906 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:41:39.711649  252906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:41:39.711690  252906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:41:39.711808  252906 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:41:39.711820  252906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:41:39.711860  252906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:41:39.711945  252906 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-874012 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-874012 localhost minikube]
	I1020 12:41:39.773764  252906 provision.go:177] copyRemoteCerts
	I1020 12:41:39.773843  252906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:41:39.773896  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:39.793530  252906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:41:39.894356  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:41:39.915345  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:41:39.934083  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1020 12:41:39.952766  252906 provision.go:87] duration metric: took 258.888784ms to configureAuth
	I1020 12:41:39.952809  252906 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:41:39.952958  252906 config.go:182] Loaded profile config "default-k8s-diff-port-874012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:41:39.953073  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:39.971448  252906 main.go:141] libmachine: Using SSH client type: native
	I1020 12:41:39.971719  252906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1020 12:41:39.971739  252906 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:41:40.227824  252906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:41:40.227856  252906 machine.go:96] duration metric: took 1.03808794s to provisionDockerMachine
	I1020 12:41:40.227868  252906 client.go:171] duration metric: took 6.881317923s to LocalClient.Create
	I1020 12:41:40.227890  252906 start.go:167] duration metric: took 6.881374822s to libmachine.API.Create "default-k8s-diff-port-874012"
	I1020 12:41:40.227900  252906 start.go:293] postStartSetup for "default-k8s-diff-port-874012" (driver="docker")
	I1020 12:41:40.227915  252906 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:41:40.227971  252906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:41:40.228005  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:40.247306  252906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:41:40.351204  252906 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:41:40.355388  252906 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:41:40.355413  252906 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:41:40.355425  252906 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:41:40.355484  252906 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:41:40.355558  252906 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:41:40.355659  252906 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:41:40.363884  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:41:40.385115  252906 start.go:296] duration metric: took 157.197959ms for postStartSetup
	I1020 12:41:40.385489  252906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-874012
	I1020 12:41:40.403999  252906 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/config.json ...
	I1020 12:41:40.404375  252906 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:41:40.404427  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:40.425251  252906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:41:40.527362  252906 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:41:40.532184  252906 start.go:128] duration metric: took 7.187825267s to createHost
	I1020 12:41:40.532210  252906 start.go:83] releasing machines lock for "default-k8s-diff-port-874012", held for 7.187959407s
	I1020 12:41:40.532272  252906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-874012
	I1020 12:41:40.550973  252906 ssh_runner.go:195] Run: cat /version.json
	I1020 12:41:40.551030  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:40.551109  252906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:41:40.551179  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:40.570419  252906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:41:40.570910  252906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:41:40.720672  252906 ssh_runner.go:195] Run: systemctl --version
	I1020 12:41:40.727600  252906 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:41:40.763207  252906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:41:40.768481  252906 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:41:40.768548  252906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:41:40.797033  252906 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
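ssh_runner logs the find command above with its shell quoting stripped, so the line is not copy-paste runnable as printed. A properly escaped reconstruction of the apparent intent (the $1-based -exec body is a safer rewrite for illustration, not minikube's literal invocation):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;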
	I1020 12:41:40.797055  252906 start.go:495] detecting cgroup driver to use...
	I1020 12:41:40.797083  252906 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:41:40.797128  252906 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:41:40.814526  252906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:41:40.828227  252906 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:41:40.828304  252906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:41:40.846128  252906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:41:40.864527  252906 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:41:40.947458  252906 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:41:41.046598  252906 docker.go:234] disabling docker service ...
	I1020 12:41:41.046684  252906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:41:41.068592  252906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:41:41.082879  252906 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:41:41.176160  252906 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:41:41.275909  252906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
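The sequence above is the runtime switch: every competing runtime is stopped, disabled, and masked so that only CRI-O can serve the kubelet. Condensed from the logged commands:

    sudo systemctl stop -f containerd
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service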
	I1020 12:41:41.288953  252906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:41:41.305202  252906 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:41:41.305253  252906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.316186  252906 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:41:41.316246  252906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.325374  252906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.334731  252906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.343901  252906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:41:41.352457  252906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.361612  252906 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.376034  252906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.384726  252906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:41:41.392483  252906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:41:41.400081  252906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:41:41.487460  252906 ssh_runner.go:195] Run: sudo systemctl restart crio
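The sed edits above amount to a small rewrite of the CRI-O drop-in: point crictl at the CRI-O socket, pin the pause image, switch the cgroup manager to systemd (matching the driver detected on the host), move conmon into the pod cgroup, enable IP forwarding, and restart. Collected into one sketch with paths and values exactly as logged (the default_sysctls edit for net.ipv4.ip_unprivileged_port_start appears above and is omitted here for brevity):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio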
	I1020 12:41:41.596147  252906 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:41:41.596214  252906 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:41:41.600141  252906 start.go:563] Will wait 60s for crictl version
	I1020 12:41:41.600202  252906 ssh_runner.go:195] Run: which crictl
	I1020 12:41:41.603530  252906 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:41:41.627656  252906 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:41:41.627744  252906 ssh_runner.go:195] Run: crio --version
	I1020 12:41:41.655169  252906 ssh_runner.go:195] Run: crio --version
	I1020 12:41:41.684747  252906 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:41:41.686287  252906 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-874012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:41:41.703863  252906 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1020 12:41:41.707871  252906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
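
	Note: the /etc/hosts rewrite above is the idempotent strip-then-append pattern: drop any stale host.minikube.internal line, append the current mapping, then copy the temp file into place, so repeated starts never accumulate duplicates. The same commands, unrolled:
	
	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$     # keep everything except the old entry
	printf '192.168.103.1\thost.minikube.internal\n' >> /tmp/h.$$   # re-add the fresh mapping
	sudo cp /tmp/h.$$ /etc/hosts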
	I1020 12:41:41.718108  252906 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-874012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:41:41.718233  252906 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:41:41.718284  252906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:41:41.750555  252906 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:41:41.750590  252906 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:41:41.750643  252906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:41:41.775520  252906 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:41:41.775543  252906 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:41:41.775550  252906 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1020 12:41:41.775629  252906 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-874012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 12:41:41.775706  252906 ssh_runner.go:195] Run: crio config
	I1020 12:41:41.823332  252906 cni.go:84] Creating CNI manager for ""
	I1020 12:41:41.823358  252906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:41:41.823379  252906 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:41:41.823411  252906 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-874012 NodeName:default-k8s-diff-port-874012 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:41:41.823560  252906 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-874012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
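
	Note: the four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are shipped to /var/tmp/minikube/kubeadm.yaml.new below and copied into place before init runs. Once the file is in place, the manifest can be exercised without mutating node state via kubeadm's dry-run mode (a sketch):
	
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run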
	I1020 12:41:41.823619  252906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:41:41.834290  252906 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:41:41.834359  252906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:41:41.842658  252906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1020 12:41:41.856438  252906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:41:41.873412  252906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1020 12:41:41.886730  252906 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:41:41.890592  252906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:41:41.900719  252906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:41:41.982195  252906 ssh_runner.go:195] Run: sudo systemctl start kubelet
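
	Note: at this point the kubelet unit (the 352-byte kubelet.service) and its 379-byte 10-kubeadm.conf drop-in from the scp steps above are live, so the effective ExecStart, including --hostname-override and --node-ip, can be confirmed with systemctl (a sketch):
	
	systemctl cat kubelet --no-pager   # prints the unit plus the 10-kubeadm.conf drop-in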
	I1020 12:41:42.009414  252906 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012 for IP: 192.168.103.2
	I1020 12:41:42.009436  252906 certs.go:195] generating shared ca certs ...
	I1020 12:41:42.009452  252906 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.009606  252906 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:41:42.009672  252906 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:41:42.009687  252906 certs.go:257] generating profile certs ...
	I1020 12:41:42.009757  252906 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.key
	I1020 12:41:42.009795  252906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.crt with IP's: []
	I1020 12:41:42.127946  252906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.crt ...
	I1020 12:41:42.127974  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.crt: {Name:mk38e41c5d5d89138fd1da3f4f42e460c3181c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.128193  252906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.key ...
	I1020 12:41:42.128213  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.key: {Name:mkedf9a96f34a9715127b774381ab8ca235193aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.128336  252906 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key.fa6ae681
	I1020 12:41:42.128365  252906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt.fa6ae681 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1020 12:41:42.265179  252906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt.fa6ae681 ...
	I1020 12:41:42.265209  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt.fa6ae681: {Name:mk9079ed6aaac93802c324fb6801c56265d3df6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.265411  252906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key.fa6ae681 ...
	I1020 12:41:42.265433  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key.fa6ae681: {Name:mk40008c3c43218f8c68d7a345c739cd23609329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.265542  252906 certs.go:382] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt.fa6ae681 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt
	I1020 12:41:42.265642  252906 certs.go:386] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key.fa6ae681 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key
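
	Note: the .fa6ae681 suffix appears to key the apiserver cert to its SAN/IP set, so a changed IP list would force regeneration; the copy steps then install the pair under the canonical apiserver.crt/apiserver.key names. The SANs baked into the generated cert can be inspected with openssl (a sketch):
	
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
	# expect 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.103.2 per the generation step above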
	I1020 12:41:42.265722  252906 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.key
	I1020 12:41:42.265744  252906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.crt with IP's: []
	I1020 12:41:42.293541  252906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.crt ...
	I1020 12:41:42.293577  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.crt: {Name:mka62b7ef50f8343c9070bcfedbcc5d571031780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.293815  252906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.key ...
	I1020 12:41:42.293838  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.key: {Name:mk478b3ae49d16722839adbcd74f5bc870eeccc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.294038  252906 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:41:42.294072  252906 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:41:42.294082  252906 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:41:42.294104  252906 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:41:42.294128  252906 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:41:42.294151  252906 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:41:42.294189  252906 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:41:42.294726  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:41:42.313886  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:41:42.332363  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:41:42.351032  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:41:42.369764  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1020 12:41:42.388015  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:41:42.406621  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:41:42.426144  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 12:41:42.444399  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:41:42.465232  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:41:42.483960  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:41:42.503413  252906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:41:42.517686  252906 ssh_runner.go:195] Run: openssl version
	I1020 12:41:42.524277  252906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:41:42.533473  252906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:41:42.537684  252906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:41:42.537740  252906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:41:42.572620  252906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:41:42.582110  252906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:41:42.591313  252906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:41:42.595418  252906 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:41:42.595476  252906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:41:42.630020  252906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:41:42.639951  252906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:41:42.649277  252906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:41:42.653262  252906 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:41:42.653326  252906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:41:42.688218  252906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
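
	Note: the link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: each preceding `openssl x509 -hash` call prints exactly the hash that becomes <hash>.0 under /etc/ssl/certs, which is what lets libssl discover these CAs without running update-ca-certificates. The pattern in two lines (a sketch):
	
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here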
	I1020 12:41:42.697796  252906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:41:42.701587  252906 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 12:41:42.701642  252906 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-874012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:41:42.701702  252906 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:41:42.701749  252906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:41:42.729005  252906 cri.go:89] found id: ""
	I1020 12:41:42.729086  252906 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:41:42.737616  252906 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:41:42.745921  252906 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 12:41:42.745972  252906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:41:42.753709  252906 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 12:41:42.753724  252906 kubeadm.go:157] found existing configuration files:
	
	I1020 12:41:42.753786  252906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1020 12:41:42.761880  252906 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 12:41:42.761943  252906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 12:41:42.769766  252906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1020 12:41:42.777911  252906 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 12:41:42.777965  252906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:41:42.786313  252906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1020 12:41:42.794245  252906 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 12:41:42.794311  252906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:41:42.801979  252906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1020 12:41:42.809889  252906 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 12:41:42.809942  252906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 12:41:42.817648  252906 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 12:41:42.888159  252906 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1020 12:41:42.949155  252906 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
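
	Note: both preflight warnings are benign in this environment: SystemVerification is deliberately ignored for the docker driver (kubeadm.go:214 above), and kubelet is started directly by the harness rather than enabled as a unit. The preflight phase can be replayed in isolation with the same ignore list (a sketch):
	
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification,Service-Kubelet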
	I1020 12:41:41.143866  236655 cri.go:89] found id: ""
	I1020 12:41:41.143918  236655 logs.go:282] 0 containers: []
	W1020 12:41:41.143927  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:41.143932  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:41.144014  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:41.172645  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:41.172667  236655 cri.go:89] found id: ""
	I1020 12:41:41.172675  236655 logs.go:282] 1 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:41:41.172731  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:41.177284  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:41.177354  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:41.207766  236655 cri.go:89] found id: ""
	I1020 12:41:41.207822  236655 logs.go:282] 0 containers: []
	W1020 12:41:41.207834  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:41.207842  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:41.208022  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:41.243633  236655 cri.go:89] found id: ""
	I1020 12:41:41.243664  236655 logs.go:282] 0 containers: []
	W1020 12:41:41.243675  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:41.243686  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:41.243701  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:41.278650  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:41.278675  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:41.356683  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:41.356709  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:41.372645  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:41.372682  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:41.438336  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:41.438360  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:41.438375  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:41.470723  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:41.470757  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:41.519042  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:41.519073  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:41.546571  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:41.546596  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
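
	Note: each gathering cycle in this log pulls the same fixed set of sources: container status, the kubelet and CRI-O journals (last 400 lines), dmesg, describe nodes, and per-container logs by ID. The equivalent manual pulls (a sketch; container IDs come from `crictl ps -a`):
	
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo journalctl -u crio -n 400 --no-pager
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>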
	I1020 12:41:44.090908  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:44.091367  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:44.091426  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:44.091482  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:44.120355  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:44.120387  236655 cri.go:89] found id: ""
	I1020 12:41:44.120397  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:44.120458  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:44.124692  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:44.124766  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:44.153937  236655 cri.go:89] found id: ""
	I1020 12:41:44.153968  236655 logs.go:282] 0 containers: []
	W1020 12:41:44.153979  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:44.153986  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:44.154044  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:44.183327  236655 cri.go:89] found id: ""
	I1020 12:41:44.183356  236655 logs.go:282] 0 containers: []
	W1020 12:41:44.183367  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:44.183375  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:44.183455  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:44.212833  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:44.212857  236655 cri.go:89] found id: ""
	I1020 12:41:44.212865  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:44.212919  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:44.217029  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:44.217106  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:44.246745  236655 cri.go:89] found id: ""
	I1020 12:41:44.246792  236655 logs.go:282] 0 containers: []
	W1020 12:41:44.246802  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:44.246809  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:44.246869  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:44.274686  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:44.274707  236655 cri.go:89] found id: ""
	I1020 12:41:44.274716  236655 logs.go:282] 1 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:41:44.274795  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:44.279076  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:44.279151  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:44.307073  236655 cri.go:89] found id: ""
	I1020 12:41:44.307108  236655 logs.go:282] 0 containers: []
	W1020 12:41:44.307118  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:44.307124  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:44.307187  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:44.334948  236655 cri.go:89] found id: ""
	I1020 12:41:44.334975  236655 logs.go:282] 0 containers: []
	W1020 12:41:44.334982  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:44.334991  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:44.335003  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:44.369227  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:44.369259  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:44.450360  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:44.450396  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:44.465876  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:44.465906  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:44.527399  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:44.527419  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:44.527431  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:44.565085  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:44.565132  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:44.613481  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:44.613519  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:44.641223  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:44.641252  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	
	
	==> CRI-O <==
	Oct 20 12:41:10 no-preload-649841 crio[562]: time="2025-10-20T12:41:10.86130982Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 12:41:10 no-preload-649841 crio[562]: time="2025-10-20T12:41:10.864750323Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 12:41:10 no-preload-649841 crio[562]: time="2025-10-20T12:41:10.864799263Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.024529935Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=52d91a99-a52c-483e-8fe9-e36cceb54603 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.02734188Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=681edce1-807f-41cc-922b-c3edc06c818d name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.030339029Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk/dashboard-metrics-scraper" id=6503d84b-d380-440a-9d90-8729d11dadf5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.03048381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.03744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.037974373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.066615373Z" level=info msg="Created container fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk/dashboard-metrics-scraper" id=6503d84b-d380-440a-9d90-8729d11dadf5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.067285456Z" level=info msg="Starting container: fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614" id=1d2b7e7f-8d58-40c6-b532-07cffbd957da name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.069204084Z" level=info msg="Started container" PID=1760 containerID=fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk/dashboard-metrics-scraper id=1d2b7e7f-8d58-40c6-b532-07cffbd957da name=/runtime.v1.RuntimeService/StartContainer sandboxID=2800f6ce54b66817f3594b83ff7b336311a2b16931c0c2c57a623d1bf7c03b90
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.12109394Z" level=info msg="Removing container: 1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d" id=8244e132-79ab-4e71-be02-d62383cdfae9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.130591963Z" level=info msg="Removed container 1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk/dashboard-metrics-scraper" id=8244e132-79ab-4e71-be02-d62383cdfae9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.136920486Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=935462bc-4ef8-42b5-b9f3-afbe15efe0ac name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.137801885Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=089fef40-6dac-47c2-97e4-6ea6899a902a name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.13883867Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ef87b44a-3d11-4ade-b5d4-f365dca98956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.138957252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.143130534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.143260166Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0997648d10bbd82f0b8e05382d7efac89ed6797f0ff2b8baed2f9aeff4287a16/merged/etc/passwd: no such file or directory"
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.143282405Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0997648d10bbd82f0b8e05382d7efac89ed6797f0ff2b8baed2f9aeff4287a16/merged/etc/group: no such file or directory"
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.143477328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.166919409Z" level=info msg="Created container f430bf4944f7a61d04bf8ebdff50ad02df4215136bd523ead0bef1727db29034: kube-system/storage-provisioner/storage-provisioner" id=ef87b44a-3d11-4ade-b5d4-f365dca98956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.16748807Z" level=info msg="Starting container: f430bf4944f7a61d04bf8ebdff50ad02df4215136bd523ead0bef1727db29034" id=059adba1-3470-4213-adc0-b15237367c09 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.169621374Z" level=info msg="Started container" PID=1775 containerID=f430bf4944f7a61d04bf8ebdff50ad02df4215136bd523ead0bef1727db29034 description=kube-system/storage-provisioner/storage-provisioner id=059adba1-3470-4213-adc0-b15237367c09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d48793b3885007d0ad15bc7c21101e1839f4b6c53d9fc00b4af4b04c44513bcc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f430bf4944f7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   d48793b388500       storage-provisioner                          kube-system
	fa3a1311ff92d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   2800f6ce54b66       dashboard-metrics-scraper-6ffb444bf9-kkwfk   kubernetes-dashboard
	0550ddaaca138       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   581d07710cacb       kubernetes-dashboard-855c9754f9-48d7f        kubernetes-dashboard
	1c175f02085b5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   5bafa486ce7c7       busybox                                      default
	61fe223c6fe5b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   45c89a78a6317       coredns-66bc5c9577-7d88p                     kube-system
	4b705515b3e6c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   c7d08cbce3510       kube-proxy-6vpwz                             kube-system
	47543b902bb8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   d48793b388500       storage-provisioner                          kube-system
	b299c1600a1eb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   60696fb7f9389       kindnet-ghtcz                                kube-system
	816d9c037942c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           52 seconds ago      Running             kube-controller-manager     0                   d1d423cb588d4       kube-controller-manager-no-preload-649841    kube-system
	49212f5520e23       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           52 seconds ago      Running             etcd                        0                   c959d95137748       etcd-no-preload-649841                       kube-system
	bf13bdfc60d3a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           52 seconds ago      Running             kube-scheduler              0                   7f1fe04ff6946       kube-scheduler-no-preload-649841             kube-system
	28717124ea3c3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           52 seconds ago      Running             kube-apiserver              0                   0a35cf2f48043       kube-apiserver-no-preload-649841             kube-system
	
	
	==> coredns [61fe223c6fe5bcb16adc3e355e55c3fbe804f30fc5ce435434798668a773ca35] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41740 - 49910 "HINFO IN 744305241757770200.8402828542348679612. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05388471s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-649841
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-649841
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=no-preload-649841
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_40_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:39:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-649841
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:41:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:41:40 +0000   Mon, 20 Oct 2025 12:39:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:41:40 +0000   Mon, 20 Oct 2025 12:39:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:41:40 +0000   Mon, 20 Oct 2025 12:39:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:41:40 +0000   Mon, 20 Oct 2025 12:40:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-649841
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                433a6564-548d-4f1d-8a4a-223c020110ee
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-7d88p                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-no-preload-649841                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-ghtcz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-no-preload-649841              250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-no-preload-649841     200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-6vpwz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-no-preload-649841              100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kkwfk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-48d7f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  108s               kubelet          Node no-preload-649841 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s               kubelet          Node no-preload-649841 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s               kubelet          Node no-preload-649841 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node no-preload-649841 event: Registered Node no-preload-649841 in Controller
	  Normal  NodeReady                90s                kubelet          Node no-preload-649841 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)  kubelet          Node no-preload-649841 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet          Node no-preload-649841 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)  kubelet          Node no-preload-649841 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node no-preload-649841 event: Registered Node no-preload-649841 in Controller
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [49212f5520e23aa6f4699b58e138ce3c6899c074fd04839a3812363c6bf726d0] <==
	{"level":"warn","ts":"2025-10-20T12:40:58.859498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.865469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.872588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.878957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.885102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.891835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.900293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.907233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.913699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.921540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.927612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.933928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.941072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.947735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.954011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.960549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.967238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.973952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.979857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.993014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.999989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:59.006373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:59.055351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:14.195663Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.587305ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596502384797684 > lease_revoke:<id:06ed9a01a231cf6b>","response":"size:28"}
	{"level":"info","ts":"2025-10-20T12:41:14.195957Z","caller":"traceutil/trace.go:172","msg":"trace[1120950342] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"104.826728ms","start":"2025-10-20T12:41:14.091115Z","end":"2025-10-20T12:41:14.195942Z","steps":["trace[1120950342] 'process raft request'  (duration: 104.700836ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:41:50 up  1:24,  0 user,  load average: 2.30, 3.19, 2.05
	Linux no-preload-649841 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b299c1600a1eb44936aedd6cde2e8365c9906379c50dd89eb8ad705c657a863d] <==
	I1020 12:41:00.641788       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:41:00.642066       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 12:41:00.642258       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:41:00.642281       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:41:00.642308       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:41:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:41:00.843010       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:41:00.843658       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:41:00.843693       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:41:00.843860       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:41:01.338102       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:41:01.338135       1 metrics.go:72] Registering metrics
	I1020 12:41:01.338209       1 controller.go:711] "Syncing nftables rules"
	I1020 12:41:10.842844       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:41:10.842894       1 main.go:301] handling current node
	I1020 12:41:20.847724       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:41:20.847757       1 main.go:301] handling current node
	I1020 12:41:30.843354       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:41:30.843395       1 main.go:301] handling current node
	I1020 12:41:40.843001       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:41:40.843037       1 main.go:301] handling current node
	
	
	==> kube-apiserver [28717124ea3c362de3161e549a9412d0e0beda3ede0b813f19be2debafac8bd1] <==
	I1020 12:40:59.513256       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 12:40:59.513267       1 cache.go:39] Caches are synced for autoregister controller
	I1020 12:40:59.513145       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 12:40:59.513294       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 12:40:59.513202       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1020 12:40:59.519489       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 12:40:59.521872       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 12:40:59.522449       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:40:59.522026       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 12:40:59.521983       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1020 12:40:59.531792       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1020 12:40:59.540551       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1020 12:40:59.540697       1 policy_source.go:240] refreshing policies
	I1020 12:40:59.581900       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:40:59.777534       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:40:59.805521       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:40:59.824037       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:40:59.834260       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:40:59.840873       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:40:59.876379       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.203.217"}
	I1020 12:40:59.885974       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.230.183"}
	I1020 12:41:00.416330       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:41:02.840000       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 12:41:03.288863       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:41:03.338119       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [816d9c037942c04231fca4c103de9e2bf20fdf60fa1761988b5c578a09691679] <==
	I1020 12:41:02.813736       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 12:41:02.816074       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1020 12:41:02.817285       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 12:41:02.819549       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 12:41:02.820802       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 12:41:02.823037       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 12:41:02.835498       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1020 12:41:02.835510       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 12:41:02.835541       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 12:41:02.835552       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:41:02.835585       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1020 12:41:02.835614       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 12:41:02.835633       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1020 12:41:02.835655       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 12:41:02.835685       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 12:41:02.835699       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 12:41:02.839174       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 12:41:02.840316       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 12:41:02.840337       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 12:41:02.840450       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 12:41:02.841511       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:41:02.841565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:41:02.841614       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 12:41:02.845575       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:41:02.860850       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4b705515b3e6c7ede78b49b5e0fb2e2465d9214e74325acdefd45ec4d57b7057] <==
	I1020 12:41:00.427080       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:41:00.482052       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:41:00.582534       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:41:00.582573       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 12:41:00.582666       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:41:00.603753       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:41:00.603827       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:41:00.608751       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:41:00.609591       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:41:00.609661       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:41:00.611829       1 config.go:200] "Starting service config controller"
	I1020 12:41:00.611899       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:41:00.611952       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:41:00.611990       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:41:00.612023       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:41:00.612047       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:41:00.612269       1 config.go:309] "Starting node config controller"
	I1020 12:41:00.612311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:41:00.612337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:41:00.712029       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:41:00.712103       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:41:00.712117       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bf13bdfc60d3a55c47badd4fa2e0a4042348a310ddce98adaa907a594a64d40d] <==
	I1020 12:40:59.479766       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 12:40:59.479923       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:40:59.484921       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:40:59.484963       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:40:59.485990       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 12:40:59.486098       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1020 12:40:59.489587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:40:59.490165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:40:59.490239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:40:59.495527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 12:40:59.495725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:40:59.495743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:40:59.495912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:40:59.496512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:40:59.496707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:40:59.496896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:40:59.497138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:40:59.497151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:40:59.497347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:40:59.497907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 12:40:59.497955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:40:59.499598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:40:59.501007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 12:40:59.501385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1020 12:40:59.586733       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:41:03 no-preload-649841 kubelet[713]: I1020 12:41:03.459659     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fec5bad0-dbb2-4040-ada9-4839502e4521-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-48d7f\" (UID: \"fec5bad0-dbb2-4040-ada9-4839502e4521\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-48d7f"
	Oct 20 12:41:03 no-preload-649841 kubelet[713]: I1020 12:41:03.459717     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8d150dca-8ac0-456c-b923-a90e607f3abd-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-kkwfk\" (UID: \"8d150dca-8ac0-456c-b923-a90e607f3abd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk"
	Oct 20 12:41:03 no-preload-649841 kubelet[713]: I1020 12:41:03.832270     713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 20 12:41:06 no-preload-649841 kubelet[713]: I1020 12:41:06.069469     713 scope.go:117] "RemoveContainer" containerID="944de41e4536bb93c3acac45577e11cf7e79a6dad80d1e6d2c12d0b2a1a053c5"
	Oct 20 12:41:07 no-preload-649841 kubelet[713]: I1020 12:41:07.073507     713 scope.go:117] "RemoveContainer" containerID="944de41e4536bb93c3acac45577e11cf7e79a6dad80d1e6d2c12d0b2a1a053c5"
	Oct 20 12:41:07 no-preload-649841 kubelet[713]: I1020 12:41:07.073707     713 scope.go:117] "RemoveContainer" containerID="1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d"
	Oct 20 12:41:07 no-preload-649841 kubelet[713]: E1020 12:41:07.073897     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kkwfk_kubernetes-dashboard(8d150dca-8ac0-456c-b923-a90e607f3abd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk" podUID="8d150dca-8ac0-456c-b923-a90e607f3abd"
	Oct 20 12:41:08 no-preload-649841 kubelet[713]: I1020 12:41:08.078651     713 scope.go:117] "RemoveContainer" containerID="1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d"
	Oct 20 12:41:08 no-preload-649841 kubelet[713]: E1020 12:41:08.078870     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kkwfk_kubernetes-dashboard(8d150dca-8ac0-456c-b923-a90e607f3abd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk" podUID="8d150dca-8ac0-456c-b923-a90e607f3abd"
	Oct 20 12:41:09 no-preload-649841 kubelet[713]: I1020 12:41:09.092321     713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-48d7f" podStartSLOduration=1.108050483 podStartE2EDuration="6.092301235s" podCreationTimestamp="2025-10-20 12:41:03 +0000 UTC" firstStartedPulling="2025-10-20 12:41:03.741416088 +0000 UTC m=+6.808406424" lastFinishedPulling="2025-10-20 12:41:08.725666842 +0000 UTC m=+11.792657176" observedRunningTime="2025-10-20 12:41:09.091985669 +0000 UTC m=+12.158976030" watchObservedRunningTime="2025-10-20 12:41:09.092301235 +0000 UTC m=+12.159291584"
	Oct 20 12:41:12 no-preload-649841 kubelet[713]: I1020 12:41:12.842055     713 scope.go:117] "RemoveContainer" containerID="1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d"
	Oct 20 12:41:12 no-preload-649841 kubelet[713]: E1020 12:41:12.842281     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kkwfk_kubernetes-dashboard(8d150dca-8ac0-456c-b923-a90e607f3abd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk" podUID="8d150dca-8ac0-456c-b923-a90e607f3abd"
	Oct 20 12:41:25 no-preload-649841 kubelet[713]: I1020 12:41:25.024055     713 scope.go:117] "RemoveContainer" containerID="1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d"
	Oct 20 12:41:25 no-preload-649841 kubelet[713]: I1020 12:41:25.119761     713 scope.go:117] "RemoveContainer" containerID="1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d"
	Oct 20 12:41:25 no-preload-649841 kubelet[713]: I1020 12:41:25.120017     713 scope.go:117] "RemoveContainer" containerID="fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614"
	Oct 20 12:41:25 no-preload-649841 kubelet[713]: E1020 12:41:25.120219     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kkwfk_kubernetes-dashboard(8d150dca-8ac0-456c-b923-a90e607f3abd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk" podUID="8d150dca-8ac0-456c-b923-a90e607f3abd"
	Oct 20 12:41:31 no-preload-649841 kubelet[713]: I1020 12:41:31.136576     713 scope.go:117] "RemoveContainer" containerID="47543b902bb8bfb4682f05eb9ca15a0f86d22c693594891a3799e6b769feb9c8"
	Oct 20 12:41:32 no-preload-649841 kubelet[713]: I1020 12:41:32.842435     713 scope.go:117] "RemoveContainer" containerID="fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614"
	Oct 20 12:41:32 no-preload-649841 kubelet[713]: E1020 12:41:32.842642     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kkwfk_kubernetes-dashboard(8d150dca-8ac0-456c-b923-a90e607f3abd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk" podUID="8d150dca-8ac0-456c-b923-a90e607f3abd"
	Oct 20 12:41:44 no-preload-649841 kubelet[713]: I1020 12:41:44.023137     713 scope.go:117] "RemoveContainer" containerID="fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614"
	Oct 20 12:41:44 no-preload-649841 kubelet[713]: E1020 12:41:44.023360     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kkwfk_kubernetes-dashboard(8d150dca-8ac0-456c-b923-a90e607f3abd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk" podUID="8d150dca-8ac0-456c-b923-a90e607f3abd"
	Oct 20 12:41:47 no-preload-649841 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 12:41:47 no-preload-649841 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 12:41:47 no-preload-649841 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 20 12:41:47 no-preload-649841 systemd[1]: kubelet.service: Consumed 1.600s CPU time.
	
	
	==> kubernetes-dashboard [0550ddaaca138162356cb67e6b85432b155df954ed848975ffed2389b56fd043] <==
	2025/10/20 12:41:08 Starting overwatch
	2025/10/20 12:41:08 Using namespace: kubernetes-dashboard
	2025/10/20 12:41:08 Using in-cluster config to connect to apiserver
	2025/10/20 12:41:08 Using secret token for csrf signing
	2025/10/20 12:41:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 12:41:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 12:41:08 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 12:41:08 Generating JWE encryption key
	2025/10/20 12:41:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 12:41:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 12:41:08 Initializing JWE encryption key from synchronized object
	2025/10/20 12:41:08 Creating in-cluster Sidecar client
	2025/10/20 12:41:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 12:41:08 Serving insecurely on HTTP port: 9090
	2025/10/20 12:41:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [47543b902bb8bfb4682f05eb9ca15a0f86d22c693594891a3799e6b769feb9c8] <==
	I1020 12:41:00.392515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 12:41:30.395147       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f430bf4944f7a61d04bf8ebdff50ad02df4215136bd523ead0bef1727db29034] <==
	I1020 12:41:31.181394       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 12:41:31.188582       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 12:41:31.188622       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 12:41:31.190785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:34.646062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:38.906367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:42.504553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:45.557621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:48.579499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:48.585370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:41:48.585557       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 12:41:48.585697       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-649841_f48f612d-6093-43ab-aab0-672a9db17fa2!
	I1020 12:41:48.585693       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a4fe8ee9-f82c-4cee-82a6-30314a2d696f", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-649841_f48f612d-6093-43ab-aab0-672a9db17fa2 became leader
	W1020 12:41:48.587753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:48.593889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:41:48.686526       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-649841_f48f612d-6093-43ab-aab0-672a9db17fa2!
	

                                                
                                                
-- /stdout --
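The per-component sections above (e.g. the "==> etcd [...] <==" block) are stitched together by minikube logs from the individual CRI-O containers on the node. As a rough sketch (assuming the container ID from that section header and a reachable node), the same output for a single container could be pulled directly with crictl:

	# sketch: tail one container's log on the node; the ID prefix is taken from the "==> etcd [...] <==" header above
	out/minikube-linux-amd64 -p no-preload-649841 ssh -- sudo crictl logs --tail 50 49212f5520e2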
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-649841 -n no-preload-649841
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-649841 -n no-preload-649841: exit status 2 (333.268779ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
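The "(may be ok)" qualifier exists because minikube status encodes component health bitwise in its exit code (1 = host, 2 = cluster, 4 = Kubernetes, per minikube status --help), so exit status 2 means only the cluster bit is set — plausible for a profile that has just been paused. A minimal sketch for decoding the code:

	# sketch: decode the bitwise exit status of minikube status (1=host, 2=cluster, 4=kubernetes NOK)
	rc=0
	out/minikube-linux-amd64 status -p no-preload-649841 -n no-preload-649841 >/dev/null || rc=$?
	(( rc & 1 )) && echo "host NOK"
	(( rc & 2 )) && echo "cluster NOK"   # the bit observed in this run (exit status 2)
	(( rc & 4 )) && echo "kubernetes NOK"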
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-649841 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-649841
helpers_test.go:243: (dbg) docker inspect no-preload-649841:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a",
	        "Created": "2025-10-20T12:39:34.746845301Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246632,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:40:49.95458922Z",
	            "FinishedAt": "2025-10-20T12:40:49.040094866Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a/hosts",
	        "LogPath": "/var/lib/docker/containers/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a/3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a-json.log",
	        "Name": "/no-preload-649841",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-649841:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-649841",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3ebdc406ea0072d3994b3f46be1cc7faceee48a3ec99ece18f12dec0a60c2c8a",
	                "LowerDir": "/var/lib/docker/overlay2/fd7335f5d0e8f3ae86c6eebf190fb273780f3e6fe176c1c06ca1b78cffb62873-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fd7335f5d0e8f3ae86c6eebf190fb273780f3e6fe176c1c06ca1b78cffb62873/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fd7335f5d0e8f3ae86c6eebf190fb273780f3e6fe176c1c06ca1b78cffb62873/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fd7335f5d0e8f3ae86c6eebf190fb273780f3e6fe176c1c06ca1b78cffb62873/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-649841",
	                "Source": "/var/lib/docker/volumes/no-preload-649841/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-649841",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-649841",
	                "name.minikube.sigs.k8s.io": "no-preload-649841",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0163ed58e91cc363b014c9e64b219fd6b9081774ea1d7cefde489f36afdd44e6",
	            "SandboxKey": "/var/run/docker/netns/0163ed58e91c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-649841": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:05:c9:d8:d8:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6720b99a1b6d91a202341926290513ef2c609bf0485dc9d73b76615c6b693c13",
	                    "EndpointID": "4ca837274b57372c9d685a025f52e1a02e0935ec30fb9143ee19619338fdc860",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-649841",
	                        "3ebdc406ea00"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
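Individual fields from the inspect output above can be extracted without reading the whole JSON by using docker inspect's Go-template --format flag. For example, the host port published for the container's 8443/tcp API server endpoint (shown as 33071 above):

	# prints the host port mapped to the container's 8443/tcp endpoint
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-649841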
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-649841 -n no-preload-649841
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-649841 -n no-preload-649841: exit status 2 (315.479511ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-649841 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-649841 logs -n 25: (1.220953417s)
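The -n 25 flag caps each component log at 25 lines (minikube logs -n/--length), which is why every section below is a short tail. When the tail is not enough, the same logs can be written in full to a file — a sketch, assuming the --file flag of this minikube build:

	# sketch: capture the complete post-mortem logs to a file instead of a truncated tail
	out/minikube-linux-amd64 -p no-preload-649841 logs --file=/tmp/no-preload-649841.log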
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ force-systemd-flag-670413 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-670413    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p force-systemd-flag-670413                                                                                                                                                                                                                  │ force-systemd-flag-670413    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ cert-options-418869 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-418869          │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ ssh     │ -p cert-options-418869 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-418869          │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ delete  │ -p cert-options-418869                                                                                                                                                                                                                        │ cert-options-418869          │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:40 UTC │
	│ stop    │ -p kubernetes-upgrade-196539                                                                                                                                                                                                                  │ kubernetes-upgrade-196539    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-384253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p old-k8s-version-384253 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-384253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:41 UTC │
	│ addons  │ enable metrics-server -p no-preload-649841 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p no-preload-649841 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p no-preload-649841 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:41 UTC │
	│ image   │ old-k8s-version-384253 image list --format=json                                                                                                                                                                                               │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p old-k8s-version-384253 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ start   │ -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ image   │ no-preload-649841 image list --format=json                                                                                                                                                                                                    │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p no-preload-649841 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
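	The Audit table above is rendered from minikube's command audit trail, which is kept as JSON under MINIKUBE_HOME. A sketch of filtering the raw entries for one profile (the path follows this run's MINIKUBE_HOME; the `.data` field names are assumptions inferred from the rendered columns):

	jq -r 'select(.data.profile=="no-preload-649841") | [.data.command, .data.startTime] | @tsv' \
	  /home/jenkins/minikube-integration/21773-11075/.minikube/logs/audit.json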
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:41:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:41:33.149481  252906 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:41:33.149783  252906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:41:33.149793  252906 out.go:374] Setting ErrFile to fd 2...
	I1020 12:41:33.149798  252906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:41:33.150032  252906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:41:33.150595  252906 out.go:368] Setting JSON to false
	I1020 12:41:33.151924  252906 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5042,"bootTime":1760959051,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:41:33.152044  252906 start.go:141] virtualization: kvm guest
	I1020 12:41:33.154542  252906 out.go:179] * [default-k8s-diff-port-874012] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:41:33.156078  252906 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:41:33.156074  252906 notify.go:220] Checking for updates...
	I1020 12:41:33.157638  252906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:41:33.159126  252906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:41:33.160329  252906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:41:33.161693  252906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:41:33.163016  252906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:41:33.164900  252906 config.go:182] Loaded profile config "cert-expiration-365628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:41:33.165003  252906 config.go:182] Loaded profile config "kubernetes-upgrade-196539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:41:33.165091  252906 config.go:182] Loaded profile config "no-preload-649841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:41:33.165180  252906 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:41:33.189720  252906 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:41:33.189819  252906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:41:33.252158  252906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-20 12:41:33.240526552 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:41:33.252276  252906 docker.go:318] overlay module found
	I1020 12:41:33.254285  252906 out.go:179] * Using the docker driver based on user configuration
	I1020 12:41:33.255872  252906 start.go:305] selected driver: docker
	I1020 12:41:33.255890  252906 start.go:925] validating driver "docker" against <nil>
	I1020 12:41:33.255905  252906 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:41:33.256448  252906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:41:33.314053  252906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-20 12:41:33.303046441 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:41:33.314236  252906 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 12:41:33.314456  252906 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:41:33.316237  252906 out.go:179] * Using Docker driver with root privileges
	I1020 12:41:33.317393  252906 cni.go:84] Creating CNI manager for ""
	I1020 12:41:33.317469  252906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:41:33.317481  252906 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 12:41:33.317556  252906 start.go:349] cluster config:
	{Name:default-k8s-diff-port-874012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:41:33.318953  252906 out.go:179] * Starting "default-k8s-diff-port-874012" primary control-plane node in "default-k8s-diff-port-874012" cluster
	I1020 12:41:33.320223  252906 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:41:33.321626  252906 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:41:33.322749  252906 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:41:33.322809  252906 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:41:33.322832  252906 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:41:33.322847  252906 cache.go:58] Caching tarball of preloaded images
	I1020 12:41:33.322971  252906 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:41:33.322981  252906 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:41:33.323077  252906 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/config.json ...
	I1020 12:41:33.323100  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/config.json: {Name:mkbaf95fe95383d81bbdcce007e08d73cbbc5331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:33.344046  252906 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:41:33.344069  252906 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:41:33.344102  252906 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:41:33.344130  252906 start.go:360] acquireMachinesLock for default-k8s-diff-port-874012: {Name:mk3fe7fe7ce0d8961f5f623b6e43bccc5068bc5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:41:33.344237  252906 start.go:364] duration metric: took 87.067µs to acquireMachinesLock for "default-k8s-diff-port-874012"
	I1020 12:41:33.344266  252906 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-874012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:41:33.344345  252906 start.go:125] createHost starting for "" (driver="docker")
	W1020 12:41:31.511940  246403 pod_ready.go:104] pod "coredns-66bc5c9577-7d88p" is not "Ready", error: <nil>
	I1020 12:41:34.010841  246403 pod_ready.go:94] pod "coredns-66bc5c9577-7d88p" is "Ready"
	I1020 12:41:34.010866  246403 pod_ready.go:86] duration metric: took 33.005722872s for pod "coredns-66bc5c9577-7d88p" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.013315  246403 pod_ready.go:83] waiting for pod "etcd-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.017174  246403 pod_ready.go:94] pod "etcd-no-preload-649841" is "Ready"
	I1020 12:41:34.017195  246403 pod_ready.go:86] duration metric: took 3.859597ms for pod "etcd-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.019131  246403 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.022716  246403 pod_ready.go:94] pod "kube-apiserver-no-preload-649841" is "Ready"
	I1020 12:41:34.022737  246403 pod_ready.go:86] duration metric: took 3.586444ms for pod "kube-apiserver-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.024529  246403 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.209736  246403 pod_ready.go:94] pod "kube-controller-manager-no-preload-649841" is "Ready"
	I1020 12:41:34.209762  246403 pod_ready.go:86] duration metric: took 185.214305ms for pod "kube-controller-manager-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.408994  246403 pod_ready.go:83] waiting for pod "kube-proxy-6vpwz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:34.809293  246403 pod_ready.go:94] pod "kube-proxy-6vpwz" is "Ready"
	I1020 12:41:34.809322  246403 pod_ready.go:86] duration metric: took 400.303842ms for pod "kube-proxy-6vpwz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:35.009721  246403 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:35.409564  246403 pod_ready.go:94] pod "kube-scheduler-no-preload-649841" is "Ready"
	I1020 12:41:35.409594  246403 pod_ready.go:86] duration metric: took 399.84125ms for pod "kube-scheduler-no-preload-649841" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:41:35.409608  246403 pod_ready.go:40] duration metric: took 34.40803163s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:41:35.457296  246403 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:41:35.460364  246403 out.go:179] * Done! kubectl is now configured to use "no-preload-649841" cluster and "default" namespace by default
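	The pod_ready loop above polls each control-plane pod for the Ready condition (or its disappearance) before declaring the start complete. Roughly the same check can be run with kubectl against the freshly configured context; a sketch using one of the label selectors listed in the log:

	kubectl --context no-preload-649841 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s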
	I1020 12:41:31.641472  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:31.641935  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:31.641986  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:31.642050  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:31.670466  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:31.670488  236655 cri.go:89] found id: ""
	I1020 12:41:31.670496  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:31.670544  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:31.674547  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:31.674609  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:31.702395  236655 cri.go:89] found id: ""
	I1020 12:41:31.702419  236655 logs.go:282] 0 containers: []
	W1020 12:41:31.702429  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:31.702435  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:31.702496  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:31.730192  236655 cri.go:89] found id: ""
	I1020 12:41:31.730219  236655 logs.go:282] 0 containers: []
	W1020 12:41:31.730228  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:31.730234  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:31.730289  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:31.760024  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:31.760046  236655 cri.go:89] found id: ""
	I1020 12:41:31.760056  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:31.760122  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:31.764226  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:31.764294  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:31.811664  236655 cri.go:89] found id: ""
	I1020 12:41:31.811691  236655 logs.go:282] 0 containers: []
	W1020 12:41:31.811700  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:31.811705  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:31.811780  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:31.846253  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:31.846281  236655 cri.go:89] found id: ""
	I1020 12:41:31.846292  236655 logs.go:282] 1 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:41:31.846379  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:31.850833  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:31.850934  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:31.879925  236655 cri.go:89] found id: ""
	I1020 12:41:31.879948  236655 logs.go:282] 0 containers: []
	W1020 12:41:31.879959  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:31.879965  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:31.880023  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:31.909123  236655 cri.go:89] found id: ""
	I1020 12:41:31.909154  236655 logs.go:282] 0 containers: []
	W1020 12:41:31.909166  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:31.909177  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:31.909191  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:31.924661  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:31.924688  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:31.986833  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:31.986857  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:31.986868  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:32.023276  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:32.023307  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:32.073966  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:32.074000  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:32.101452  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:32.101481  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:32.155708  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:32.155747  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:32.187309  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:32.187331  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
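	Each gathering pass above follows the same pattern: enumerate CRI containers for a component by name, then tail 400 lines from any IDs found. The core query is reproducible on the node over SSH; a sketch using the same crictl flags as in the log:

	sudo crictl ps -a --quiet --name=kube-apiserver | while read -r id; do
	  sudo crictl logs --tail 400 "$id"
	done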
	I1020 12:41:34.764842  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:34.765235  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:34.765283  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:34.765348  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:34.795171  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:34.795195  236655 cri.go:89] found id: ""
	I1020 12:41:34.795204  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:34.795266  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:34.799282  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:34.799356  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:34.829250  236655 cri.go:89] found id: ""
	I1020 12:41:34.829279  236655 logs.go:282] 0 containers: []
	W1020 12:41:34.829310  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:34.829318  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:34.829369  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:34.857659  236655 cri.go:89] found id: ""
	I1020 12:41:34.857688  236655 logs.go:282] 0 containers: []
	W1020 12:41:34.857700  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:34.857707  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:34.857797  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:34.886515  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:34.886538  236655 cri.go:89] found id: ""
	I1020 12:41:34.886550  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:34.886617  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:34.891087  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:34.891169  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:34.920955  236655 cri.go:89] found id: ""
	I1020 12:41:34.920986  236655 logs.go:282] 0 containers: []
	W1020 12:41:34.920997  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:34.921005  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:34.921073  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:34.949538  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:34.949555  236655 cri.go:89] found id: ""
	I1020 12:41:34.949564  236655 logs.go:282] 1 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:41:34.949624  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:34.953690  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:34.953767  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:34.982184  236655 cri.go:89] found id: ""
	I1020 12:41:34.982215  236655 logs.go:282] 0 containers: []
	W1020 12:41:34.982226  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:34.982234  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:34.982296  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:35.012910  236655 cri.go:89] found id: ""
	I1020 12:41:35.012933  236655 logs.go:282] 0 containers: []
	W1020 12:41:35.012943  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:35.012954  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:35.012969  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:35.029874  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:35.029909  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:35.091944  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:35.091962  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:35.091973  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:35.133388  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:35.133440  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:35.183700  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:35.183737  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:35.214834  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:35.214867  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:35.270009  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:35.270045  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:35.306207  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:35.306244  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:33.346299  252906 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 12:41:33.346514  252906 start.go:159] libmachine.API.Create for "default-k8s-diff-port-874012" (driver="docker")
	I1020 12:41:33.346544  252906 client.go:168] LocalClient.Create starting
	I1020 12:41:33.346600  252906 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 12:41:33.346629  252906 main.go:141] libmachine: Decoding PEM data...
	I1020 12:41:33.346646  252906 main.go:141] libmachine: Parsing certificate...
	I1020 12:41:33.346711  252906 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 12:41:33.346733  252906 main.go:141] libmachine: Decoding PEM data...
	I1020 12:41:33.346741  252906 main.go:141] libmachine: Parsing certificate...
	I1020 12:41:33.347123  252906 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-874012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:41:33.365615  252906 cli_runner.go:211] docker network inspect default-k8s-diff-port-874012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:41:33.365683  252906 network_create.go:284] running [docker network inspect default-k8s-diff-port-874012] to gather additional debugging logs...
	I1020 12:41:33.365701  252906 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-874012
	W1020 12:41:33.384046  252906 cli_runner.go:211] docker network inspect default-k8s-diff-port-874012 returned with exit code 1
	I1020 12:41:33.384079  252906 network_create.go:287] error running [docker network inspect default-k8s-diff-port-874012]: docker network inspect default-k8s-diff-port-874012: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-874012 not found
	I1020 12:41:33.384111  252906 network_create.go:289] output of [docker network inspect default-k8s-diff-port-874012]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-874012 not found
	
	** /stderr **
	I1020 12:41:33.384200  252906 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:41:33.403248  252906 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
	I1020 12:41:33.404125  252906 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b260bb82684 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:b2:17:6a:60:46} reservation:<nil>}
	I1020 12:41:33.404833  252906 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-db577f5b7d12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:85:a2:b7:03:1d} reservation:<nil>}
	I1020 12:41:33.405154  252906 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1f871d5cfd48 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a2:c6:86:42:b6:13} reservation:<nil>}
	I1020 12:41:33.405727  252906 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6720b99a1b6d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:6e:e8:d3:69:12:f1} reservation:<nil>}
	I1020 12:41:33.406293  252906 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-4b75e071d2ef IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:a6:2a:37:02:57:60} reservation:<nil>}
	I1020 12:41:33.407467  252906 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e97f50}
	I1020 12:41:33.407495  252906 network_create.go:124] attempt to create docker network default-k8s-diff-port-874012 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1020 12:41:33.407560  252906 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-874012 default-k8s-diff-port-874012
	I1020 12:41:33.470088  252906 network_create.go:108] docker network default-k8s-diff-port-874012 192.168.103.0/24 created
	I1020 12:41:33.470121  252906 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-874012" container
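	The subnet probe above walks candidate /24 networks upward in steps of 9 in the third octet (49, 58, 67, 76, 85, 94, ...) and takes the first one without an existing bridge; the node container is then pinned to the .2 address of that subnet. A purely illustrative sketch of the same walk:

	for o in $(seq 49 9 103); do echo "192.168.${o}.0/24"; done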
	I1020 12:41:33.470193  252906 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:41:33.489214  252906 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-874012 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-874012 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:41:33.509191  252906 oci.go:103] Successfully created a docker volume default-k8s-diff-port-874012
	I1020 12:41:33.509278  252906 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-874012-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-874012 --entrypoint /usr/bin/test -v default-k8s-diff-port-874012:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 12:41:33.907116  252906 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-874012
	I1020 12:41:33.907150  252906 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:41:33.907170  252906 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:41:33.907227  252906 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-874012:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 12:41:37.893670  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:37.894143  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:37.894195  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:37.894245  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:37.924142  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:37.924171  236655 cri.go:89] found id: ""
	I1020 12:41:37.924181  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:37.924240  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:37.928284  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:37.928346  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:37.957570  236655 cri.go:89] found id: ""
	I1020 12:41:37.957596  236655 logs.go:282] 0 containers: []
	W1020 12:41:37.957607  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:37.957614  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:37.957675  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:37.987138  236655 cri.go:89] found id: ""
	I1020 12:41:37.987160  236655 logs.go:282] 0 containers: []
	W1020 12:41:37.987169  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:37.987177  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:37.987244  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:38.015383  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:38.015408  236655 cri.go:89] found id: ""
	I1020 12:41:38.015418  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:38.015484  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:38.020299  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:38.020384  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:38.049441  236655 cri.go:89] found id: ""
	I1020 12:41:38.049465  236655 logs.go:282] 0 containers: []
	W1020 12:41:38.049472  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:38.049477  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:38.049527  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:38.078251  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:38.078276  236655 cri.go:89] found id: ""
	I1020 12:41:38.078287  236655 logs.go:282] 1 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:41:38.078349  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:38.082472  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:38.082532  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:38.111176  236655 cri.go:89] found id: ""
	I1020 12:41:38.111202  236655 logs.go:282] 0 containers: []
	W1020 12:41:38.111213  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:38.111226  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:38.111281  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:38.139967  236655 cri.go:89] found id: ""
	I1020 12:41:38.139996  236655 logs.go:282] 0 containers: []
	W1020 12:41:38.140004  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:38.140015  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:38.140028  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:38.172044  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:38.172079  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:38.244222  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:38.244260  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:38.259049  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:38.259078  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:38.318419  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:38.318439  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:38.318452  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:38.353857  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:38.353888  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:38.398589  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:38.398628  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:38.428996  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:38.429024  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:40.978845  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:40.979326  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:40.979375  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:40.979466  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:41.015302  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:41.015328  236655 cri.go:89] found id: ""
	I1020 12:41:41.015335  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:41.015383  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:41.019689  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:41.019789  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:41.047218  236655 cri.go:89] found id: ""
	I1020 12:41:41.047240  236655 logs.go:282] 0 containers: []
	W1020 12:41:41.047250  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:41.047256  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:41.047319  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:41.077150  236655 cri.go:89] found id: ""
	I1020 12:41:41.077174  236655 logs.go:282] 0 containers: []
	W1020 12:41:41.077181  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:41.077188  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:41.077239  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:41.106848  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:41.106867  236655 cri.go:89] found id: ""
	I1020 12:41:41.106874  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:41.106931  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:41.112104  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:41.112175  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:38.448910  252906 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-874012:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.541633419s)
	I1020 12:41:38.448941  252906 kic.go:203] duration metric: took 4.541766758s to extract preloaded images to volume ...
	W1020 12:41:38.449028  252906 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 12:41:38.449065  252906 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 12:41:38.449114  252906 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:41:38.508433  252906 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-874012 --name default-k8s-diff-port-874012 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-874012 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-874012 --network default-k8s-diff-port-874012 --ip 192.168.103.2 --volume default-k8s-diff-port-874012:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
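	The docker run above is the entire "node": a privileged container that will boot systemd and CRI-O, with several container ports (8444, 22, 2376, 5000, 32443) each published to an ephemeral 127.0.0.1 host port. The later "container inspect -f ... 22/tcp" calls read that mapping back; by hand:
	
	    docker port default-k8s-diff-port-874012 22/tcp   # e.g. 127.0.0.1:33073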
	I1020 12:41:38.788829  252906 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Running}}
	I1020 12:41:38.806661  252906 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Status}}
	I1020 12:41:38.825433  252906 cli_runner.go:164] Run: docker exec default-k8s-diff-port-874012 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:41:38.867316  252906 oci.go:144] the created container "default-k8s-diff-port-874012" has a running status.
	I1020 12:41:38.867359  252906 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa...
	I1020 12:41:39.064103  252906 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:41:39.097977  252906 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Status}}
	I1020 12:41:39.121314  252906 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:41:39.121347  252906 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-874012 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:41:39.170120  252906 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Status}}
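	The generated id_rsa keypair is installed as /home/docker/.ssh/authorized_keys inside the container, which is what lets every later ssh_runner step log in as the docker user. A manual login sketch using the key path above and the ephemeral host port that appears just below (33073):
	
	    ssh -p 33073 \
	        -i /home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa \
	        docker@127.0.0.1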
	I1020 12:41:39.189740  252906 machine.go:93] provisionDockerMachine start ...
	I1020 12:41:39.189873  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:39.213690  252906 main.go:141] libmachine: Using SSH client type: native
	I1020 12:41:39.214067  252906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1020 12:41:39.214090  252906 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:41:39.358718  252906 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-874012
	
	I1020 12:41:39.358746  252906 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-874012"
	I1020 12:41:39.358826  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:39.378506  252906 main.go:141] libmachine: Using SSH client type: native
	I1020 12:41:39.378840  252906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1020 12:41:39.378869  252906 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-874012 && echo "default-k8s-diff-port-874012" | sudo tee /etc/hostname
	I1020 12:41:39.533271  252906 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-874012
	
	I1020 12:41:39.533361  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:39.552494  252906 main.go:141] libmachine: Using SSH client type: native
	I1020 12:41:39.552736  252906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1020 12:41:39.552764  252906 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-874012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-874012/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-874012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:41:39.693762  252906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:41:39.693825  252906 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:41:39.693853  252906 ubuntu.go:190] setting up certificates
	I1020 12:41:39.693865  252906 provision.go:84] configureAuth start
	I1020 12:41:39.693928  252906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-874012
	I1020 12:41:39.711350  252906 provision.go:143] copyHostCerts
	I1020 12:41:39.711427  252906 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:41:39.711444  252906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:41:39.711519  252906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:41:39.711635  252906 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:41:39.711649  252906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:41:39.711690  252906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:41:39.711808  252906 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:41:39.711820  252906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:41:39.711860  252906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:41:39.711945  252906 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-874012 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-874012 localhost minikube]
	I1020 12:41:39.773764  252906 provision.go:177] copyRemoteCerts
	I1020 12:41:39.773843  252906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:41:39.773896  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:39.793530  252906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:41:39.894356  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:41:39.915345  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:41:39.934083  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1020 12:41:39.952766  252906 provision.go:87] duration metric: took 258.888784ms to configureAuth
	I1020 12:41:39.952809  252906 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:41:39.952958  252906 config.go:182] Loaded profile config "default-k8s-diff-port-874012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:41:39.953073  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:39.971448  252906 main.go:141] libmachine: Using SSH client type: native
	I1020 12:41:39.971719  252906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1020 12:41:39.971739  252906 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:41:40.227824  252906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:41:40.227856  252906 machine.go:96] duration metric: took 1.03808794s to provisionDockerMachine
	I1020 12:41:40.227868  252906 client.go:171] duration metric: took 6.881317923s to LocalClient.Create
	I1020 12:41:40.227890  252906 start.go:167] duration metric: took 6.881374822s to libmachine.API.Create "default-k8s-diff-port-874012"
	I1020 12:41:40.227900  252906 start.go:293] postStartSetup for "default-k8s-diff-port-874012" (driver="docker")
	I1020 12:41:40.227915  252906 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:41:40.227971  252906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:41:40.228005  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:40.247306  252906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:41:40.351204  252906 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:41:40.355388  252906 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:41:40.355413  252906 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:41:40.355425  252906 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:41:40.355484  252906 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:41:40.355558  252906 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:41:40.355659  252906 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:41:40.363884  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:41:40.385115  252906 start.go:296] duration metric: took 157.197959ms for postStartSetup
	I1020 12:41:40.385489  252906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-874012
	I1020 12:41:40.403999  252906 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/config.json ...
	I1020 12:41:40.404375  252906 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:41:40.404427  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:40.425251  252906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:41:40.527362  252906 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:41:40.532184  252906 start.go:128] duration metric: took 7.187825267s to createHost
	I1020 12:41:40.532210  252906 start.go:83] releasing machines lock for "default-k8s-diff-port-874012", held for 7.187959407s
	I1020 12:41:40.532272  252906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-874012
	I1020 12:41:40.550973  252906 ssh_runner.go:195] Run: cat /version.json
	I1020 12:41:40.551030  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:40.551109  252906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:41:40.551179  252906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:41:40.570419  252906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:41:40.570910  252906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:41:40.720672  252906 ssh_runner.go:195] Run: systemctl --version
	I1020 12:41:40.727600  252906 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:41:40.763207  252906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:41:40.768481  252906 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:41:40.768548  252906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:41:40.797033  252906 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
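	The find/mv pass above parks any bridge or podman CNI configs under a .mk_disabled suffix so the runtime stops loading them; kindnet, chosen further down for the docker driver + crio runtime combination, is then the only CNI left. To list what was disabled:
	
	    ls /etc/cni/net.d/*.mk_disabled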
	I1020 12:41:40.797055  252906 start.go:495] detecting cgroup driver to use...
	I1020 12:41:40.797083  252906 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:41:40.797128  252906 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:41:40.814526  252906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:41:40.828227  252906 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:41:40.828304  252906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:41:40.846128  252906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:41:40.864527  252906 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:41:40.947458  252906 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:41:41.046598  252906 docker.go:234] disabling docker service ...
	I1020 12:41:41.046684  252906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:41:41.068592  252906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:41:41.082879  252906 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:41:41.176160  252906 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:41:41.275909  252906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:41:41.288953  252906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:41:41.305202  252906 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:41:41.305253  252906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.316186  252906 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:41:41.316246  252906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.325374  252906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.334731  252906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.343901  252906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:41:41.352457  252906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.361612  252906 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.376034  252906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:41:41.384726  252906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:41:41.392483  252906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:41:41.400081  252906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:41:41.487460  252906 ssh_runner.go:195] Run: sudo systemctl restart crio
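	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A quick way to check the result on the node (the expected lines are inferred from the edits, not quoted from the file):
	
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10.1"
	    # cgroup_manager = "systemd"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",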
	I1020 12:41:41.596147  252906 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:41:41.596214  252906 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:41:41.600141  252906 start.go:563] Will wait 60s for crictl version
	I1020 12:41:41.600202  252906 ssh_runner.go:195] Run: which crictl
	I1020 12:41:41.603530  252906 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:41:41.627656  252906 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:41:41.627744  252906 ssh_runner.go:195] Run: crio --version
	I1020 12:41:41.655169  252906 ssh_runner.go:195] Run: crio --version
	I1020 12:41:41.684747  252906 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:41:41.686287  252906 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-874012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:41:41.703863  252906 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1020 12:41:41.707871  252906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:41:41.718108  252906 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-874012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:41:41.718233  252906 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:41:41.718284  252906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:41:41.750555  252906 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:41:41.750590  252906 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:41:41.750643  252906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:41:41.775520  252906 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:41:41.775543  252906 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:41:41.775550  252906 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1020 12:41:41.775629  252906 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-874012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
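	Two flags in the generated kubelet unit above are worth noting: --cgroups-per-qos=false together with an empty --enforce-node-allocatable= disables QoS and node-allocatable cgroup enforcement, a reasonable reading being that the "node" is itself a container rather than a full host. The effective unit, including the 10-kubeadm.conf drop-in copied below, can be inspected with:
	
	    systemctl cat kubelet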
	I1020 12:41:41.775706  252906 ssh_runner.go:195] Run: crio config
	I1020 12:41:41.823332  252906 cni.go:84] Creating CNI manager for ""
	I1020 12:41:41.823358  252906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:41:41.823379  252906 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:41:41.823411  252906 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-874012 NodeName:default-k8s-diff-port-874012 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:41:41.823560  252906 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-874012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 12:41:41.823619  252906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:41:41.834290  252906 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:41:41.834359  252906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:41:41.842658  252906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1020 12:41:41.856438  252906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:41:41.873412  252906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
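	The kubeadm config printed above is staged as /var/tmp/minikube/kubeadm.yaml.new and only promoted to kubeadm.yaml later (the cp further down in this log). Recent kubeadm versions also ship a standalone validator, so the staged file could be checked before init; a sketch, assuming that subcommand is available in v1.34.1:
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new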
	I1020 12:41:41.886730  252906 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:41:41.890592  252906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
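	The /etc/hosts rewrite uses a standard sudo idiom: `sudo echo ... >> /etc/hosts` would fail because the redirection runs in the unprivileged shell, so the file is rebuilt in /tmp and copied back under sudo. The same one-liner, unrolled for readability:
	
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      echo $'192.168.103.2\tcontrol-plane.minikube.internal'
	    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts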
	I1020 12:41:41.900719  252906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:41:41.982195  252906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:41:42.009414  252906 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012 for IP: 192.168.103.2
	I1020 12:41:42.009436  252906 certs.go:195] generating shared ca certs ...
	I1020 12:41:42.009452  252906 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.009606  252906 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:41:42.009672  252906 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:41:42.009687  252906 certs.go:257] generating profile certs ...
	I1020 12:41:42.009757  252906 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.key
	I1020 12:41:42.009795  252906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.crt with IP's: []
	I1020 12:41:42.127946  252906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.crt ...
	I1020 12:41:42.127974  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.crt: {Name:mk38e41c5d5d89138fd1da3f4f42e460c3181c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.128193  252906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.key ...
	I1020 12:41:42.128213  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.key: {Name:mkedf9a96f34a9715127b774381ab8ca235193aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.128336  252906 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key.fa6ae681
	I1020 12:41:42.128365  252906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt.fa6ae681 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1020 12:41:42.265179  252906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt.fa6ae681 ...
	I1020 12:41:42.265209  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt.fa6ae681: {Name:mk9079ed6aaac93802c324fb6801c56265d3df6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.265411  252906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key.fa6ae681 ...
	I1020 12:41:42.265433  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key.fa6ae681: {Name:mk40008c3c43218f8c68d7a345c739cd23609329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.265542  252906 certs.go:382] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt.fa6ae681 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt
	I1020 12:41:42.265642  252906 certs.go:386] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key.fa6ae681 -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key
	I1020 12:41:42.265722  252906 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.key
	I1020 12:41:42.265744  252906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.crt with IP's: []
	I1020 12:41:42.293541  252906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.crt ...
	I1020 12:41:42.293577  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.crt: {Name:mka62b7ef50f8343c9070bcfedbcc5d571031780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.293815  252906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.key ...
	I1020 12:41:42.293838  252906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.key: {Name:mk478b3ae49d16722839adbcd74f5bc870eeccc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:41:42.294038  252906 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:41:42.294072  252906 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:41:42.294082  252906 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:41:42.294104  252906 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:41:42.294128  252906 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:41:42.294151  252906 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:41:42.294189  252906 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:41:42.294726  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:41:42.313886  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:41:42.332363  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:41:42.351032  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:41:42.369764  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1020 12:41:42.388015  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:41:42.406621  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:41:42.426144  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 12:41:42.444399  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:41:42.465232  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:41:42.483960  252906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:41:42.503413  252906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:41:42.517686  252906 ssh_runner.go:195] Run: openssl version
	I1020 12:41:42.524277  252906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:41:42.533473  252906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:41:42.537684  252906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:41:42.537740  252906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:41:42.572620  252906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:41:42.582110  252906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:41:42.591313  252906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:41:42.595418  252906 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:41:42.595476  252906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:41:42.630020  252906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:41:42.639951  252906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:41:42.649277  252906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:41:42.653262  252906 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:41:42.653326  252906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:41:42.688218  252906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
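	The link names b5213941.0, 51391683.0, and 3ec20f2e.0 above are OpenSSL subject-hash filenames: `openssl x509 -hash` prints the 8-hex-digit hash that TLS libraries use to look a CA up in /etc/ssl/certs, and the trailing .0 disambiguates hash collisions. Reproducing one of the links by hand:
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0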
	I1020 12:41:42.697796  252906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:41:42.701587  252906 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 12:41:42.701642  252906 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-874012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:41:42.701702  252906 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:41:42.701749  252906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:41:42.729005  252906 cri.go:89] found id: ""
	I1020 12:41:42.729086  252906 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:41:42.737616  252906 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:41:42.745921  252906 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 12:41:42.745972  252906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:41:42.753709  252906 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 12:41:42.753724  252906 kubeadm.go:157] found existing configuration files:
	
	I1020 12:41:42.753786  252906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1020 12:41:42.761880  252906 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 12:41:42.761943  252906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 12:41:42.769766  252906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1020 12:41:42.777911  252906 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 12:41:42.777965  252906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:41:42.786313  252906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1020 12:41:42.794245  252906 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 12:41:42.794311  252906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:41:42.801979  252906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1020 12:41:42.809889  252906 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 12:41:42.809942  252906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 12:41:42.817648  252906 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 12:41:42.888159  252906 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1020 12:41:42.949155  252906 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
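	Both preflight warnings are expected on this runner: the kernel "configs" module is simply not shipped with the GCP kernel image, and the kubelet unit is left disabled because minikube starts it directly (the systemctl start kubelet earlier in this log). Were the second warning ever worth silencing, its own suggested fix applies:
	
	    sudo systemctl enable kubelet.service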
	I1020 12:41:41.143866  236655 cri.go:89] found id: ""
	I1020 12:41:41.143918  236655 logs.go:282] 0 containers: []
	W1020 12:41:41.143927  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:41.143932  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:41.144014  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:41.172645  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:41.172667  236655 cri.go:89] found id: ""
	I1020 12:41:41.172675  236655 logs.go:282] 1 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:41:41.172731  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:41.177284  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:41.177354  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:41.207766  236655 cri.go:89] found id: ""
	I1020 12:41:41.207822  236655 logs.go:282] 0 containers: []
	W1020 12:41:41.207834  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:41.207842  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:41.208022  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:41.243633  236655 cri.go:89] found id: ""
	I1020 12:41:41.243664  236655 logs.go:282] 0 containers: []
	W1020 12:41:41.243675  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:41.243686  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:41.243701  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:41.278650  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:41.278675  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:41.356683  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:41.356709  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:41.372645  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:41.372682  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:41.438336  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:41.438360  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:41.438375  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:41.470723  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:41.470757  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:41.519042  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:41.519073  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:41.546571  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:41.546596  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:44.090908  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:41:44.091367  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:41:44.091426  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:41:44.091482  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:41:44.120355  236655 cri.go:89] found id: "7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:44.120387  236655 cri.go:89] found id: ""
	I1020 12:41:44.120397  236655 logs.go:282] 1 containers: [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c]
	I1020 12:41:44.120458  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:44.124692  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:41:44.124766  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:41:44.153937  236655 cri.go:89] found id: ""
	I1020 12:41:44.153968  236655 logs.go:282] 0 containers: []
	W1020 12:41:44.153979  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:41:44.153986  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:41:44.154044  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:41:44.183327  236655 cri.go:89] found id: ""
	I1020 12:41:44.183356  236655 logs.go:282] 0 containers: []
	W1020 12:41:44.183367  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:41:44.183375  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:41:44.183455  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:41:44.212833  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:44.212857  236655 cri.go:89] found id: ""
	I1020 12:41:44.212865  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:41:44.212919  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:44.217029  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:41:44.217106  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:41:44.246745  236655 cri.go:89] found id: ""
	I1020 12:41:44.246792  236655 logs.go:282] 0 containers: []
	W1020 12:41:44.246802  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:41:44.246809  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:41:44.246869  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:41:44.274686  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:44.274707  236655 cri.go:89] found id: ""
	I1020 12:41:44.274716  236655 logs.go:282] 1 containers: [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:41:44.274795  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:41:44.279076  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:41:44.279151  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:41:44.307073  236655 cri.go:89] found id: ""
	I1020 12:41:44.307108  236655 logs.go:282] 0 containers: []
	W1020 12:41:44.307118  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:41:44.307124  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:41:44.307187  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:41:44.334948  236655 cri.go:89] found id: ""
	I1020 12:41:44.334975  236655 logs.go:282] 0 containers: []
	W1020 12:41:44.334982  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:41:44.334991  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:41:44.335003  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:41:44.369227  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:41:44.369259  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:41:44.450360  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:41:44.450396  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:41:44.465876  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:41:44.465906  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:41:44.527399  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:41:44.527419  236655 logs.go:123] Gathering logs for kube-apiserver [7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c] ...
	I1020 12:41:44.527431  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c"
	I1020 12:41:44.565085  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:41:44.565132  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:41:44.613481  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:41:44.613519  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:41:44.641223  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:41:44.641252  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:41:47.193849  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
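	The loop above is minikube's apiserver wait: each /healthz probe against 192.168.94.2:8443 is refused, so the tool re-enumerates the control-plane containers with crictl and re-collects their logs before retrying. A minimal sketch of running that collection by hand on the node, reusing only commands already shown in this log (curl itself is an assumption; the container ID is the kube-apiserver ID from this run):
	
	  curl -k https://192.168.94.2:8443/healthz                      # probe the apiserver endpoint seen in the log
	  sudo crictl ps -a --quiet --name=kube-apiserver                # list apiserver containers, running or exited
	  sudo /usr/local/bin/crictl logs --tail 400 7faf9e3b9dab1c2cd57940b02fbe33ece7793167de1feb4ba5eb577fbff3327c
	  sudo journalctl -u crio -n 400                                 # CRI-O service logs
	  sudo journalctl -u kubelet -n 400                              # kubelet service logs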
	
	
	==> CRI-O <==
	Oct 20 12:41:10 no-preload-649841 crio[562]: time="2025-10-20T12:41:10.86130982Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 12:41:10 no-preload-649841 crio[562]: time="2025-10-20T12:41:10.864750323Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 12:41:10 no-preload-649841 crio[562]: time="2025-10-20T12:41:10.864799263Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.024529935Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=52d91a99-a52c-483e-8fe9-e36cceb54603 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.02734188Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=681edce1-807f-41cc-922b-c3edc06c818d name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.030339029Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk/dashboard-metrics-scraper" id=6503d84b-d380-440a-9d90-8729d11dadf5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.03048381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.03744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.037974373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.066615373Z" level=info msg="Created container fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk/dashboard-metrics-scraper" id=6503d84b-d380-440a-9d90-8729d11dadf5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.067285456Z" level=info msg="Starting container: fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614" id=1d2b7e7f-8d58-40c6-b532-07cffbd957da name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.069204084Z" level=info msg="Started container" PID=1760 containerID=fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk/dashboard-metrics-scraper id=1d2b7e7f-8d58-40c6-b532-07cffbd957da name=/runtime.v1.RuntimeService/StartContainer sandboxID=2800f6ce54b66817f3594b83ff7b336311a2b16931c0c2c57a623d1bf7c03b90
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.12109394Z" level=info msg="Removing container: 1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d" id=8244e132-79ab-4e71-be02-d62383cdfae9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:41:25 no-preload-649841 crio[562]: time="2025-10-20T12:41:25.130591963Z" level=info msg="Removed container 1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk/dashboard-metrics-scraper" id=8244e132-79ab-4e71-be02-d62383cdfae9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.136920486Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=935462bc-4ef8-42b5-b9f3-afbe15efe0ac name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.137801885Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=089fef40-6dac-47c2-97e4-6ea6899a902a name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.13883867Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ef87b44a-3d11-4ade-b5d4-f365dca98956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.138957252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.143130534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.143260166Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0997648d10bbd82f0b8e05382d7efac89ed6797f0ff2b8baed2f9aeff4287a16/merged/etc/passwd: no such file or directory"
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.143282405Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0997648d10bbd82f0b8e05382d7efac89ed6797f0ff2b8baed2f9aeff4287a16/merged/etc/group: no such file or directory"
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.143477328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.166919409Z" level=info msg="Created container f430bf4944f7a61d04bf8ebdff50ad02df4215136bd523ead0bef1727db29034: kube-system/storage-provisioner/storage-provisioner" id=ef87b44a-3d11-4ade-b5d4-f365dca98956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.16748807Z" level=info msg="Starting container: f430bf4944f7a61d04bf8ebdff50ad02df4215136bd523ead0bef1727db29034" id=059adba1-3470-4213-adc0-b15237367c09 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:41:31 no-preload-649841 crio[562]: time="2025-10-20T12:41:31.169621374Z" level=info msg="Started container" PID=1775 containerID=f430bf4944f7a61d04bf8ebdff50ad02df4215136bd523ead0bef1727db29034 description=kube-system/storage-provisioner/storage-provisioner id=059adba1-3470-4213-adc0-b15237367c09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d48793b3885007d0ad15bc7c21101e1839f4b6c53d9fc00b4af4b04c44513bcc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f430bf4944f7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   d48793b388500       storage-provisioner                          kube-system
	fa3a1311ff92d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   2800f6ce54b66       dashboard-metrics-scraper-6ffb444bf9-kkwfk   kubernetes-dashboard
	0550ddaaca138       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   581d07710cacb       kubernetes-dashboard-855c9754f9-48d7f        kubernetes-dashboard
	1c175f02085b5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   5bafa486ce7c7       busybox                                      default
	61fe223c6fe5b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   45c89a78a6317       coredns-66bc5c9577-7d88p                     kube-system
	4b705515b3e6c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   c7d08cbce3510       kube-proxy-6vpwz                             kube-system
	47543b902bb8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   d48793b388500       storage-provisioner                          kube-system
	b299c1600a1eb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   60696fb7f9389       kindnet-ghtcz                                kube-system
	816d9c037942c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   d1d423cb588d4       kube-controller-manager-no-preload-649841    kube-system
	49212f5520e23       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   c959d95137748       etcd-no-preload-649841                       kube-system
	bf13bdfc60d3a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   7f1fe04ff6946       kube-scheduler-no-preload-649841             kube-system
	28717124ea3c3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   0a35cf2f48043       kube-apiserver-no-preload-649841             kube-system
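	
	Reading the table: storage-provisioner is on attempt 1 after its attempt-0 container (47543b902bb8b) exited, and dashboard-metrics-scraper sits in Exited state at attempt 2, matching the CrashLoopBackOff entries in the kubelet section below. The same view can be reproduced on the node with the fallback command logs.go ran earlier in this report:
	
	  # prefer crictl when present, fall back to docker (command as emitted above)
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a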
	
	
	==> coredns [61fe223c6fe5bcb16adc3e355e55c3fbe804f30fc5ce435434798668a773ca35] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41740 - 49910 "HINFO IN 744305241757770200.8402828542348679612. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05388471s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-649841
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-649841
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=no-preload-649841
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_40_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:39:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-649841
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:41:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:41:40 +0000   Mon, 20 Oct 2025 12:39:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:41:40 +0000   Mon, 20 Oct 2025 12:39:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:41:40 +0000   Mon, 20 Oct 2025 12:39:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:41:40 +0000   Mon, 20 Oct 2025 12:40:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-649841
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                433a6564-548d-4f1d-8a4a-223c020110ee
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-7d88p                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-no-preload-649841                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-ghtcz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-649841              250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-649841     200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-6vpwz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-649841              100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kkwfk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-48d7f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node no-preload-649841 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node no-preload-649841 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node no-preload-649841 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node no-preload-649841 event: Registered Node no-preload-649841 in Controller
	  Normal  NodeReady                92s                kubelet          Node no-preload-649841 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node no-preload-649841 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node no-preload-649841 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node no-preload-649841 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node no-preload-649841 event: Registered Node no-preload-649841 in Controller
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [49212f5520e23aa6f4699b58e138ce3c6899c074fd04839a3812363c6bf726d0] <==
	{"level":"warn","ts":"2025-10-20T12:40:58.859498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.865469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.872588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.878957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.885102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.891835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.900293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.907233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.913699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.921540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.927612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.933928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.941072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.947735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.954011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.960549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.967238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.973952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.979857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.993014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:58.999989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:59.006373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:40:59.055351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:14.195663Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.587305ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596502384797684 > lease_revoke:<id:06ed9a01a231cf6b>","response":"size:28"}
	{"level":"info","ts":"2025-10-20T12:41:14.195957Z","caller":"traceutil/trace.go:172","msg":"trace[1120950342] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"104.826728ms","start":"2025-10-20T12:41:14.091115Z","end":"2025-10-20T12:41:14.195942Z","steps":["trace[1120950342] 'process raft request'  (duration: 104.700836ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:41:52 up  1:24,  0 user,  load average: 2.30, 3.19, 2.05
	Linux no-preload-649841 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b299c1600a1eb44936aedd6cde2e8365c9906379c50dd89eb8ad705c657a863d] <==
	I1020 12:41:00.641788       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:41:00.642066       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 12:41:00.642258       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:41:00.642281       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:41:00.642308       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:41:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:41:00.843010       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:41:00.843658       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:41:00.843693       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:41:00.843860       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:41:01.338102       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:41:01.338135       1 metrics.go:72] Registering metrics
	I1020 12:41:01.338209       1 controller.go:711] "Syncing nftables rules"
	I1020 12:41:10.842844       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:41:10.842894       1 main.go:301] handling current node
	I1020 12:41:20.847724       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:41:20.847757       1 main.go:301] handling current node
	I1020 12:41:30.843354       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:41:30.843395       1 main.go:301] handling current node
	I1020 12:41:40.843001       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:41:40.843037       1 main.go:301] handling current node
	I1020 12:41:50.851914       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 12:41:50.851941       1 main.go:301] handling current node
	
	
	==> kube-apiserver [28717124ea3c362de3161e549a9412d0e0beda3ede0b813f19be2debafac8bd1] <==
	I1020 12:40:59.513256       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 12:40:59.513267       1 cache.go:39] Caches are synced for autoregister controller
	I1020 12:40:59.513145       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 12:40:59.513294       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 12:40:59.513202       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1020 12:40:59.519489       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 12:40:59.521872       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 12:40:59.522449       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:40:59.522026       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 12:40:59.521983       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1020 12:40:59.531792       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1020 12:40:59.540551       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1020 12:40:59.540697       1 policy_source.go:240] refreshing policies
	I1020 12:40:59.581900       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:40:59.777534       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:40:59.805521       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:40:59.824037       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:40:59.834260       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:40:59.840873       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:40:59.876379       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.203.217"}
	I1020 12:40:59.885974       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.230.183"}
	I1020 12:41:00.416330       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:41:02.840000       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 12:41:03.288863       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:41:03.338119       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [816d9c037942c04231fca4c103de9e2bf20fdf60fa1761988b5c578a09691679] <==
	I1020 12:41:02.813736       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 12:41:02.816074       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1020 12:41:02.817285       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 12:41:02.819549       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 12:41:02.820802       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 12:41:02.823037       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 12:41:02.835498       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1020 12:41:02.835510       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 12:41:02.835541       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 12:41:02.835552       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:41:02.835585       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1020 12:41:02.835614       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 12:41:02.835633       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1020 12:41:02.835655       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 12:41:02.835685       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 12:41:02.835699       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 12:41:02.839174       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 12:41:02.840316       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 12:41:02.840337       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 12:41:02.840450       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 12:41:02.841511       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:41:02.841565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:41:02.841614       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 12:41:02.845575       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:41:02.860850       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4b705515b3e6c7ede78b49b5e0fb2e2465d9214e74325acdefd45ec4d57b7057] <==
	I1020 12:41:00.427080       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:41:00.482052       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:41:00.582534       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:41:00.582573       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 12:41:00.582666       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:41:00.603753       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:41:00.603827       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:41:00.608751       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:41:00.609591       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:41:00.609661       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:41:00.611829       1 config.go:200] "Starting service config controller"
	I1020 12:41:00.611899       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:41:00.611952       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:41:00.611990       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:41:00.612023       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:41:00.612047       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:41:00.612269       1 config.go:309] "Starting node config controller"
	I1020 12:41:00.612311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:41:00.612337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:41:00.712029       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:41:00.712103       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:41:00.712117       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bf13bdfc60d3a55c47badd4fa2e0a4042348a310ddce98adaa907a594a64d40d] <==
	I1020 12:40:59.479766       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 12:40:59.479923       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:40:59.484921       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:40:59.484963       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:40:59.485990       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 12:40:59.486098       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1020 12:40:59.489587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:40:59.490165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:40:59.490239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:40:59.495527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 12:40:59.495725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:40:59.495743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:40:59.495912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:40:59.496512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:40:59.496707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:40:59.496896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:40:59.497138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:40:59.497151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:40:59.497347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:40:59.497907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 12:40:59.497955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:40:59.499598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:40:59.501007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 12:40:59.501385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1020 12:40:59.586733       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:41:03 no-preload-649841 kubelet[713]: I1020 12:41:03.459659     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fec5bad0-dbb2-4040-ada9-4839502e4521-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-48d7f\" (UID: \"fec5bad0-dbb2-4040-ada9-4839502e4521\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-48d7f"
	Oct 20 12:41:03 no-preload-649841 kubelet[713]: I1020 12:41:03.459717     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8d150dca-8ac0-456c-b923-a90e607f3abd-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-kkwfk\" (UID: \"8d150dca-8ac0-456c-b923-a90e607f3abd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk"
	Oct 20 12:41:03 no-preload-649841 kubelet[713]: I1020 12:41:03.832270     713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 20 12:41:06 no-preload-649841 kubelet[713]: I1020 12:41:06.069469     713 scope.go:117] "RemoveContainer" containerID="944de41e4536bb93c3acac45577e11cf7e79a6dad80d1e6d2c12d0b2a1a053c5"
	Oct 20 12:41:07 no-preload-649841 kubelet[713]: I1020 12:41:07.073507     713 scope.go:117] "RemoveContainer" containerID="944de41e4536bb93c3acac45577e11cf7e79a6dad80d1e6d2c12d0b2a1a053c5"
	Oct 20 12:41:07 no-preload-649841 kubelet[713]: I1020 12:41:07.073707     713 scope.go:117] "RemoveContainer" containerID="1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d"
	Oct 20 12:41:07 no-preload-649841 kubelet[713]: E1020 12:41:07.073897     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kkwfk_kubernetes-dashboard(8d150dca-8ac0-456c-b923-a90e607f3abd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk" podUID="8d150dca-8ac0-456c-b923-a90e607f3abd"
	Oct 20 12:41:08 no-preload-649841 kubelet[713]: I1020 12:41:08.078651     713 scope.go:117] "RemoveContainer" containerID="1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d"
	Oct 20 12:41:08 no-preload-649841 kubelet[713]: E1020 12:41:08.078870     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kkwfk_kubernetes-dashboard(8d150dca-8ac0-456c-b923-a90e607f3abd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk" podUID="8d150dca-8ac0-456c-b923-a90e607f3abd"
	Oct 20 12:41:09 no-preload-649841 kubelet[713]: I1020 12:41:09.092321     713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-48d7f" podStartSLOduration=1.108050483 podStartE2EDuration="6.092301235s" podCreationTimestamp="2025-10-20 12:41:03 +0000 UTC" firstStartedPulling="2025-10-20 12:41:03.741416088 +0000 UTC m=+6.808406424" lastFinishedPulling="2025-10-20 12:41:08.725666842 +0000 UTC m=+11.792657176" observedRunningTime="2025-10-20 12:41:09.091985669 +0000 UTC m=+12.158976030" watchObservedRunningTime="2025-10-20 12:41:09.092301235 +0000 UTC m=+12.159291584"
	Oct 20 12:41:12 no-preload-649841 kubelet[713]: I1020 12:41:12.842055     713 scope.go:117] "RemoveContainer" containerID="1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d"
	Oct 20 12:41:12 no-preload-649841 kubelet[713]: E1020 12:41:12.842281     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kkwfk_kubernetes-dashboard(8d150dca-8ac0-456c-b923-a90e607f3abd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk" podUID="8d150dca-8ac0-456c-b923-a90e607f3abd"
	Oct 20 12:41:25 no-preload-649841 kubelet[713]: I1020 12:41:25.024055     713 scope.go:117] "RemoveContainer" containerID="1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d"
	Oct 20 12:41:25 no-preload-649841 kubelet[713]: I1020 12:41:25.119761     713 scope.go:117] "RemoveContainer" containerID="1c88fa34eb4d0c899f4d716d4c723f8e0a93885374810dc2fff9a0ab5f42629d"
	Oct 20 12:41:25 no-preload-649841 kubelet[713]: I1020 12:41:25.120017     713 scope.go:117] "RemoveContainer" containerID="fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614"
	Oct 20 12:41:25 no-preload-649841 kubelet[713]: E1020 12:41:25.120219     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kkwfk_kubernetes-dashboard(8d150dca-8ac0-456c-b923-a90e607f3abd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk" podUID="8d150dca-8ac0-456c-b923-a90e607f3abd"
	Oct 20 12:41:31 no-preload-649841 kubelet[713]: I1020 12:41:31.136576     713 scope.go:117] "RemoveContainer" containerID="47543b902bb8bfb4682f05eb9ca15a0f86d22c693594891a3799e6b769feb9c8"
	Oct 20 12:41:32 no-preload-649841 kubelet[713]: I1020 12:41:32.842435     713 scope.go:117] "RemoveContainer" containerID="fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614"
	Oct 20 12:41:32 no-preload-649841 kubelet[713]: E1020 12:41:32.842642     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kkwfk_kubernetes-dashboard(8d150dca-8ac0-456c-b923-a90e607f3abd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk" podUID="8d150dca-8ac0-456c-b923-a90e607f3abd"
	Oct 20 12:41:44 no-preload-649841 kubelet[713]: I1020 12:41:44.023137     713 scope.go:117] "RemoveContainer" containerID="fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614"
	Oct 20 12:41:44 no-preload-649841 kubelet[713]: E1020 12:41:44.023360     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kkwfk_kubernetes-dashboard(8d150dca-8ac0-456c-b923-a90e607f3abd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kkwfk" podUID="8d150dca-8ac0-456c-b923-a90e607f3abd"
	Oct 20 12:41:47 no-preload-649841 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 12:41:47 no-preload-649841 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 12:41:47 no-preload-649841 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 20 12:41:47 no-preload-649841 systemd[1]: kubelet.service: Consumed 1.600s CPU time.
	
	
	==> kubernetes-dashboard [0550ddaaca138162356cb67e6b85432b155df954ed848975ffed2389b56fd043] <==
	2025/10/20 12:41:08 Using namespace: kubernetes-dashboard
	2025/10/20 12:41:08 Using in-cluster config to connect to apiserver
	2025/10/20 12:41:08 Using secret token for csrf signing
	2025/10/20 12:41:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 12:41:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 12:41:08 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 12:41:08 Generating JWE encryption key
	2025/10/20 12:41:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 12:41:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 12:41:08 Initializing JWE encryption key from synchronized object
	2025/10/20 12:41:08 Creating in-cluster Sidecar client
	2025/10/20 12:41:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 12:41:08 Serving insecurely on HTTP port: 9090
	2025/10/20 12:41:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 12:41:08 Starting overwatch
	
	
	==> storage-provisioner [47543b902bb8bfb4682f05eb9ca15a0f86d22c693594891a3799e6b769feb9c8] <==
	I1020 12:41:00.392515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 12:41:30.395147       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f430bf4944f7a61d04bf8ebdff50ad02df4215136bd523ead0bef1727db29034] <==
	I1020 12:41:31.181394       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 12:41:31.188582       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 12:41:31.188622       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 12:41:31.190785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:34.646062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:38.906367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:42.504553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:45.557621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:48.579499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:48.585370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:41:48.585557       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 12:41:48.585697       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-649841_f48f612d-6093-43ab-aab0-672a9db17fa2!
	I1020 12:41:48.585693       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a4fe8ee9-f82c-4cee-82a6-30314a2d696f", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-649841_f48f612d-6093-43ab-aab0-672a9db17fa2 became leader
	W1020 12:41:48.587753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:48.593889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:41:48.686526       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-649841_f48f612d-6093-43ab-aab0-672a9db17fa2!
	W1020 12:41:50.596980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:41:50.604175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-649841 -n no-preload-649841
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-649841 -n no-preload-649841: exit status 2 (372.672037ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-649841 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.90s)
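The kubelet entries above show the dashboard-metrics-scraper container stuck in CrashLoopBackOff, with kubelet doubling the restart back-off from 10s to 20s, its standard exponential back-off for failing containers. A minimal sketch for pulling the crashed container's own output from the node, reusing the container ID from the RemoveContainer entries above (an illustration only; it assumes the no-preload-649841 profile still exists, and the profile is deleted later in this run):

	# read the failed scraper's last log lines with crictl on the node
	minikube ssh -p no-preload-649841 -- sudo crictl logs --tail 50 fa3a1311ff92da00533b4625060bcec60d8c377ebe527e4241c74400cf591614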

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-874012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-874012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (271.380081ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:42:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-874012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-874012 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-874012 describe deploy/metrics-server -n kube-system: exit status 1 (81.889811ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-874012 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
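The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's pre-flight check for a paused runtime: it shells into the node and runs sudo runc list -f json, and because /run/runc does not exist the check fails before the addon is ever applied. One plausible cause (an assumption, not confirmed by this log) is that crio on this node uses a different OCI runtime such as crun, leaving runc with no state directory. A minimal sketch that reruns the same check while the profile is up:

	# rerun the paused check that minikube performs
	minikube ssh -p default-k8s-diff-port-874012 -- sudo runc list -f json
	# check which OCI runtime state directories actually exist on the node
	minikube ssh -p default-k8s-diff-port-874012 -- ls /run/runc /run/crun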
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-874012
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-874012:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7",
	        "Created": "2025-10-20T12:41:38.524846166Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253946,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:41:38.568472473Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7/hostname",
	        "HostsPath": "/var/lib/docker/containers/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7/hosts",
	        "LogPath": "/var/lib/docker/containers/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7-json.log",
	        "Name": "/default-k8s-diff-port-874012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-874012:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-874012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7",
	                "LowerDir": "/var/lib/docker/overlay2/23ccaa2c1eba589d41d5dd53dc9f8e4141a1f29a896f0142badd4af6e87805d6-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23ccaa2c1eba589d41d5dd53dc9f8e4141a1f29a896f0142badd4af6e87805d6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23ccaa2c1eba589d41d5dd53dc9f8e4141a1f29a896f0142badd4af6e87805d6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23ccaa2c1eba589d41d5dd53dc9f8e4141a1f29a896f0142badd4af6e87805d6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-874012",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-874012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-874012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-874012",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-874012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "41020014f98d1322c3641fb1ccf80bb07d10cae1ddf6ae757a5337515ee910ff",
	            "SandboxKey": "/var/run/docker/netns/41020014f98d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-874012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:e5:13:9c:ed:ad",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "071054924bdb32d774c4d0c0f3c167909dde1b983fbdc59f24f908b03d171adf",
	                    "EndpointID": "3ec34a6e89169b1477db958e63f868eaba976da067c7e20c4217e2ed95752bae",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-874012",
	                        "fbc9ff1c79c1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
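The NetworkSettings block above shows the cluster's non-default API server port 8444 (set via --apiserver-port=8444) published on 127.0.0.1:33076. A minimal sketch to confirm that mapping without parsing the inspect JSON, assuming the container is still running:

	docker port default-k8s-diff-port-874012 8444/tcp
	# expected, per the inspect output above: 127.0.0.1:33076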
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-874012 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-874012 logs -n 25: (1.240645006s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p kubernetes-upgrade-196539                                                                                                                                                                                                                  │ kubernetes-upgrade-196539    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │ 20 Oct 25 12:39 UTC │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-384253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p old-k8s-version-384253 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-384253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:41 UTC │
	│ addons  │ enable metrics-server -p no-preload-649841 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p no-preload-649841 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p no-preload-649841 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:41 UTC │
	│ image   │ old-k8s-version-384253 image list --format=json                                                                                                                                                                                               │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p old-k8s-version-384253 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ start   │ -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:42 UTC │
	│ image   │ no-preload-649841 image list --format=json                                                                                                                                                                                                    │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p no-preload-649841 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ delete  │ -p no-preload-649841                                                                                                                                                                                                                          │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ delete  │ -p no-preload-649841                                                                                                                                                                                                                          │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ start   │ -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ start   │ -p cert-expiration-365628 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p cert-expiration-365628                                                                                                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p disable-driver-mounts-796609                                                                                                                                                                                                               │ disable-driver-mounts-796609 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-874012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:42:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:42:14.612898  263183 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:42:14.613136  263183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:42:14.613145  263183 out.go:374] Setting ErrFile to fd 2...
	I1020 12:42:14.613149  263183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:42:14.613410  263183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:42:14.613933  263183 out.go:368] Setting JSON to false
	I1020 12:42:14.615116  263183 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5084,"bootTime":1760959051,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:42:14.615204  263183 start.go:141] virtualization: kvm guest
	I1020 12:42:14.617617  263183 out.go:179] * [embed-certs-907116] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:42:14.619342  263183 notify.go:220] Checking for updates...
	I1020 12:42:14.619349  263183 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:42:14.620948  263183 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:42:14.622371  263183 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:42:14.623804  263183 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:42:14.625173  263183 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:42:14.626458  263183 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:42:14.628377  263183 config.go:182] Loaded profile config "default-k8s-diff-port-874012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:14.628519  263183 config.go:182] Loaded profile config "kubernetes-upgrade-196539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:14.628696  263183 config.go:182] Loaded profile config "newest-cni-916479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:14.628842  263183 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:42:14.654737  263183 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:42:14.654852  263183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:42:14.711374  263183 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-20 12:42:14.701233955 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:42:14.711494  263183 docker.go:318] overlay module found
	I1020 12:42:14.713371  263183 out.go:179] * Using the docker driver based on user configuration
	I1020 12:42:14.714654  263183 start.go:305] selected driver: docker
	I1020 12:42:14.714675  263183 start.go:925] validating driver "docker" against <nil>
	I1020 12:42:14.714686  263183 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:42:14.715311  263183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:42:14.777765  263183 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-20 12:42:14.766894275 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:42:14.777938  263183 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 12:42:14.778205  263183 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:42:14.780112  263183 out.go:179] * Using Docker driver with root privileges
	I1020 12:42:14.781325  263183 cni.go:84] Creating CNI manager for ""
	I1020 12:42:14.781391  263183 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:42:14.781402  263183 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 12:42:14.781470  263183 start.go:349] cluster config:
	{Name:embed-certs-907116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:42:14.782822  263183 out.go:179] * Starting "embed-certs-907116" primary control-plane node in "embed-certs-907116" cluster
	I1020 12:42:14.784144  263183 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:42:14.785365  263183 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:42:14.786576  263183 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:42:14.786616  263183 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:42:14.786642  263183 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:42:14.786657  263183 cache.go:58] Caching tarball of preloaded images
	I1020 12:42:14.786812  263183 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:42:14.786827  263183 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:42:14.786919  263183 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/config.json ...
	I1020 12:42:14.786938  263183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/config.json: {Name:mk5a4efe560faa4bc64ec4e339c8130dc538a5d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:14.808557  263183 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:42:14.808585  263183 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:42:14.808602  263183 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:42:14.808631  263183 start.go:360] acquireMachinesLock for embed-certs-907116: {Name:mk081262f5d599396d0c232c9311858444bc2e47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:42:14.808755  263183 start.go:364] duration metric: took 98.678µs to acquireMachinesLock for "embed-certs-907116"
	I1020 12:42:14.808798  263183 start.go:93] Provisioning new machine with config: &{Name:embed-certs-907116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:42:14.808905  263183 start.go:125] createHost starting for "" (driver="docker")
	I1020 12:42:11.794102  258335 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501823946s
	I1020 12:42:11.798377  258335 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 12:42:11.798551  258335 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1020 12:42:11.798711  258335 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 12:42:11.798860  258335 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 12:42:13.411880  258335 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.613507939s
	I1020 12:42:13.937725  258335 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.139254043s
	I1020 12:42:15.800034  258335 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001680671s
	I1020 12:42:15.814573  258335 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 12:42:15.826053  258335 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 12:42:15.837153  258335 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 12:42:15.837522  258335 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-916479 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 12:42:15.846797  258335 kubeadm.go:318] [bootstrap-token] Using token: ws4wu0.nclu899m1xyb6vga
	I1020 12:42:15.848493  258335 out.go:252]   - Configuring RBAC rules ...
	I1020 12:42:15.848636  258335 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 12:42:15.852291  258335 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 12:42:15.860989  258335 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 12:42:15.864245  258335 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 12:42:15.867136  258335 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 12:42:15.869912  258335 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 12:42:11.153570  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:11.153604  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:11.187648  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:42:11.187685  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:42:11.223391  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:11.223421  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:13.825836  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:42:13.826264  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:42:13.826322  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:42:13.826381  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:42:13.863995  236655 cri.go:89] found id: "badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:13.864017  236655 cri.go:89] found id: ""
	I1020 12:42:13.864026  236655 logs.go:282] 1 containers: [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a]
	I1020 12:42:13.864085  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:13.874909  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:42:13.874974  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:42:13.927345  236655 cri.go:89] found id: ""
	I1020 12:42:13.927369  236655 logs.go:282] 0 containers: []
	W1020 12:42:13.927379  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:42:13.927386  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:42:13.927443  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:42:13.959597  236655 cri.go:89] found id: ""
	I1020 12:42:13.959626  236655 logs.go:282] 0 containers: []
	W1020 12:42:13.959637  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:42:13.959649  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:42:13.959710  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:42:13.994975  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:13.994996  236655 cri.go:89] found id: ""
	I1020 12:42:13.995005  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:42:13.995052  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:14.000482  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:42:14.000588  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:42:14.032838  236655 cri.go:89] found id: ""
	I1020 12:42:14.032864  236655 logs.go:282] 0 containers: []
	W1020 12:42:14.032874  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:42:14.032880  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:42:14.032944  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:42:14.066245  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:14.066271  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:42:14.066276  236655 cri.go:89] found id: ""
	I1020 12:42:14.066285  236655 logs.go:282] 2 containers: [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:42:14.066339  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:14.071305  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:14.077161  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:42:14.077230  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:42:14.114057  236655 cri.go:89] found id: ""
	I1020 12:42:14.114088  236655 logs.go:282] 0 containers: []
	W1020 12:42:14.114098  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:42:14.114103  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:42:14.114150  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:42:14.149482  236655 cri.go:89] found id: ""
	I1020 12:42:14.149514  236655 logs.go:282] 0 containers: []
	W1020 12:42:14.149524  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:42:14.149544  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:42:14.149559  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:42:14.209522  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:42:14.209555  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:42:14.245216  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:42:14.245246  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:14.296883  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:14.296919  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:14.325300  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:14.325324  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:14.420308  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:42:14.420338  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:42:14.436798  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:42:14.436826  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:42:14.496719  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:42:14.496745  236655 logs.go:123] Gathering logs for kube-apiserver [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a] ...
	I1020 12:42:14.496760  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:14.533971  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:42:14.534005  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
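The gathering passes above follow minikube's fixed diagnostic sweep: enumerate containers per control-plane component with crictl, tail each one's last 400 lines, and collect the kubelet and CRI-O journals alongside. A minimal sketch of the same sweep, runnable on the node (assuming crictl and journalctl are on the PATH):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      # List all containers (any state) whose name matches the component.
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        echo "=== $name [$id] ==="
        sudo crictl logs --tail 400 "$id"   # same tail depth minikube uses
      done
    done
    sudo journalctl -u kubelet -n 400       # host-level sources gathered alongside
    sudo journalctl -u crio -n 400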
	I1020 12:42:16.207312  258335 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 12:42:16.624359  258335 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 12:42:17.206717  258335 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 12:42:17.207519  258335 kubeadm.go:318] 
	I1020 12:42:17.207636  258335 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 12:42:17.207648  258335 kubeadm.go:318] 
	I1020 12:42:17.207761  258335 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 12:42:17.207795  258335 kubeadm.go:318] 
	I1020 12:42:17.207852  258335 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 12:42:17.207944  258335 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 12:42:17.208051  258335 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 12:42:17.208064  258335 kubeadm.go:318] 
	I1020 12:42:17.208145  258335 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 12:42:17.208155  258335 kubeadm.go:318] 
	I1020 12:42:17.208230  258335 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 12:42:17.208244  258335 kubeadm.go:318] 
	I1020 12:42:17.208311  258335 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 12:42:17.208433  258335 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 12:42:17.208526  258335 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 12:42:17.208536  258335 kubeadm.go:318] 
	I1020 12:42:17.208653  258335 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 12:42:17.208761  258335 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 12:42:17.208795  258335 kubeadm.go:318] 
	I1020 12:42:17.208908  258335 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ws4wu0.nclu899m1xyb6vga \
	I1020 12:42:17.209051  258335 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e \
	I1020 12:42:17.209086  258335 kubeadm.go:318] 	--control-plane 
	I1020 12:42:17.209116  258335 kubeadm.go:318] 
	I1020 12:42:17.209231  258335 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 12:42:17.209238  258335 kubeadm.go:318] 
	I1020 12:42:17.209353  258335 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ws4wu0.nclu899m1xyb6vga \
	I1020 12:42:17.209502  258335 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e 
	I1020 12:42:17.212795  258335 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1020 12:42:17.213011  258335 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1020 12:42:17.213039  258335 cni.go:84] Creating CNI manager for ""
	I1020 12:42:17.213048  258335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:42:17.217680  258335 out.go:179] * Configuring CNI (Container Networking Interface) ...
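Collected into a runnable form, the post-init steps kubeadm printed above are the following (verbatim from its hints, plus a node check that the log itself does not show):

    mkdir -p "$HOME/.kube"
    sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
    sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
    kubectl get nodes   # reports NotReady until a pod network (here: kindnet) is applied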
	I1020 12:42:14.810925  263183 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 12:42:14.811173  263183 start.go:159] libmachine.API.Create for "embed-certs-907116" (driver="docker")
	I1020 12:42:14.811206  263183 client.go:168] LocalClient.Create starting
	I1020 12:42:14.811299  263183 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 12:42:14.811344  263183 main.go:141] libmachine: Decoding PEM data...
	I1020 12:42:14.811369  263183 main.go:141] libmachine: Parsing certificate...
	I1020 12:42:14.811440  263183 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 12:42:14.811465  263183 main.go:141] libmachine: Decoding PEM data...
	I1020 12:42:14.811491  263183 main.go:141] libmachine: Parsing certificate...
	I1020 12:42:14.811887  263183 cli_runner.go:164] Run: docker network inspect embed-certs-907116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:42:14.829796  263183 cli_runner.go:211] docker network inspect embed-certs-907116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:42:14.829872  263183 network_create.go:284] running [docker network inspect embed-certs-907116] to gather additional debugging logs...
	I1020 12:42:14.829890  263183 cli_runner.go:164] Run: docker network inspect embed-certs-907116
	W1020 12:42:14.846153  263183 cli_runner.go:211] docker network inspect embed-certs-907116 returned with exit code 1
	I1020 12:42:14.846183  263183 network_create.go:287] error running [docker network inspect embed-certs-907116]: docker network inspect embed-certs-907116: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-907116 not found
	I1020 12:42:14.846198  263183 network_create.go:289] output of [docker network inspect embed-certs-907116]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-907116 not found
	
	** /stderr **
	I1020 12:42:14.846313  263183 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:42:14.864407  263183 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
	I1020 12:42:14.865395  263183 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b260bb82684 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:b2:17:6a:60:46} reservation:<nil>}
	I1020 12:42:14.866512  263183 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-db577f5b7d12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:85:a2:b7:03:1d} reservation:<nil>}
	I1020 12:42:14.867876  263183 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e868e0}
	I1020 12:42:14.867906  263183 network_create.go:124] attempt to create docker network embed-certs-907116 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1020 12:42:14.867960  263183 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-907116 embed-certs-907116
	I1020 12:42:14.945000  263183 network_create.go:108] docker network embed-certs-907116 192.168.76.0/24 created
	I1020 12:42:14.945044  263183 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-907116" container
	I1020 12:42:14.945127  263183 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:42:14.966398  263183 cli_runner.go:164] Run: docker volume create embed-certs-907116 --label name.minikube.sigs.k8s.io=embed-certs-907116 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:42:14.987903  263183 oci.go:103] Successfully created a docker volume embed-certs-907116
	I1020 12:42:14.987966  263183 cli_runner.go:164] Run: docker run --rm --name embed-certs-907116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-907116 --entrypoint /usr/bin/test -v embed-certs-907116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 12:42:15.439094  263183 oci.go:107] Successfully prepared a docker volume embed-certs-907116
	I1020 12:42:15.439140  263183 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:42:15.439164  263183 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:42:15.439222  263183 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-907116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
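The subnet scan above walks the private /24 candidates, skips the three already owned by existing bridges (192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24), and creates the first free one. The inspect/create pair, reduced to the commands the log runs:

    # Report the subnet an existing bridge already owns (candidates that
    # collide with any such subnet are skipped).
    docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
    # Create the first free candidate explicitly, as at 12:42:14.867960 above.
    docker network create --driver=bridge \
      --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=embed-certs-907116 \
      embed-certs-907116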
	I1020 12:42:17.219466  258335 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 12:42:17.224312  258335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 12:42:17.224329  258335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 12:42:17.239339  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 12:42:17.509515  258335 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 12:42:17.509616  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-916479 minikube.k8s.io/updated_at=2025_10_20T12_42_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=newest-cni-916479 minikube.k8s.io/primary=true
	I1020 12:42:17.509677  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:17.616338  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:17.634395  258335 ops.go:34] apiserver oom_adj: -16
	I1020 12:42:18.116442  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:18.617026  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:19.117407  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:19.617404  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:20.116810  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:20.616573  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
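The repeated `get sa default` calls are minikube waiting for the default service account to exist before the kube-system privilege grant can take effect. Written as a loop (the ~500ms sleep matches the spacing of the retries above):

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done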
	I1020 12:42:17.062905  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:42:17.063371  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:42:17.063444  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:42:17.063506  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:42:17.094197  236655 cri.go:89] found id: "badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:17.094225  236655 cri.go:89] found id: ""
	I1020 12:42:17.094241  236655 logs.go:282] 1 containers: [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a]
	I1020 12:42:17.094311  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:17.098437  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:42:17.098487  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:42:17.127195  236655 cri.go:89] found id: ""
	I1020 12:42:17.127223  236655 logs.go:282] 0 containers: []
	W1020 12:42:17.127233  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:42:17.127241  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:42:17.127309  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:42:17.157622  236655 cri.go:89] found id: ""
	I1020 12:42:17.157649  236655 logs.go:282] 0 containers: []
	W1020 12:42:17.157658  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:42:17.157665  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:42:17.157728  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:42:17.185165  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:17.185191  236655 cri.go:89] found id: ""
	I1020 12:42:17.185201  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:42:17.185248  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:17.189243  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:42:17.189313  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:42:17.221549  236655 cri.go:89] found id: ""
	I1020 12:42:17.221574  236655 logs.go:282] 0 containers: []
	W1020 12:42:17.221581  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:42:17.221586  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:42:17.221629  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:42:17.253344  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:17.253367  236655 cri.go:89] found id: ""
	I1020 12:42:17.253377  236655 logs.go:282] 1 containers: [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb]
	I1020 12:42:17.253432  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:17.257573  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:42:17.257658  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:42:17.289319  236655 cri.go:89] found id: ""
	I1020 12:42:17.289351  236655 logs.go:282] 0 containers: []
	W1020 12:42:17.289362  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:42:17.289369  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:42:17.289425  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:42:17.323210  236655 cri.go:89] found id: ""
	I1020 12:42:17.323238  236655 logs.go:282] 0 containers: []
	W1020 12:42:17.323248  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:42:17.323260  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:42:17.323275  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:42:17.400751  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:42:17.400797  236655 logs.go:123] Gathering logs for kube-apiserver [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a] ...
	I1020 12:42:17.400814  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:17.442468  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:42:17.442499  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:17.509371  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:17.509408  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:17.545130  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:42:17.545158  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:42:17.623125  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:42:17.623162  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:42:17.663106  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:17.663140  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:17.755914  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:42:17.755947  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:42:20.272922  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:42:20.273801  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:42:20.273862  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:42:20.273927  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:42:20.308840  236655 cri.go:89] found id: "badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:20.308867  236655 cri.go:89] found id: ""
	I1020 12:42:20.308877  236655 logs.go:282] 1 containers: [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a]
	I1020 12:42:20.308939  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:20.313298  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:42:20.313368  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:42:20.346898  236655 cri.go:89] found id: ""
	I1020 12:42:20.346925  236655 logs.go:282] 0 containers: []
	W1020 12:42:20.346934  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:42:20.346943  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:42:20.347001  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:42:20.383379  236655 cri.go:89] found id: ""
	I1020 12:42:20.383402  236655 logs.go:282] 0 containers: []
	W1020 12:42:20.383411  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:42:20.383418  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:42:20.383470  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:42:20.418045  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:20.418071  236655 cri.go:89] found id: ""
	I1020 12:42:20.418080  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:42:20.418130  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:20.422785  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:42:20.422903  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:42:20.457488  236655 cri.go:89] found id: ""
	I1020 12:42:20.457513  236655 logs.go:282] 0 containers: []
	W1020 12:42:20.457524  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:42:20.457531  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:42:20.457589  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:42:20.487727  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:20.487751  236655 cri.go:89] found id: ""
	I1020 12:42:20.487761  236655 logs.go:282] 1 containers: [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb]
	I1020 12:42:20.487853  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:20.492928  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:42:20.492998  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:42:20.522709  236655 cri.go:89] found id: ""
	I1020 12:42:20.522743  236655 logs.go:282] 0 containers: []
	W1020 12:42:20.522752  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:42:20.522760  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:42:20.522872  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:42:20.553404  236655 cri.go:89] found id: ""
	I1020 12:42:20.553426  236655 logs.go:282] 0 containers: []
	W1020 12:42:20.553436  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:42:20.553463  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:20.553479  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:20.673393  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:42:20.673446  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:42:20.693218  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:42:20.693261  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:42:20.772032  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:42:20.772061  236655 logs.go:123] Gathering logs for kube-apiserver [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a] ...
	I1020 12:42:20.772078  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:20.836385  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:42:20.836426  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:20.931525  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:20.931570  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:20.970665  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:42:20.970695  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:42:21.064386  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:42:21.064426  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
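Every healthz check in this stretch fails with "connection refused": the kube-apiserver container exists but nothing is listening on 192.168.94.2:8443. A manual probe of the same endpoint from the host (a sketch assuming curl is installed; -k because the apiserver serves a cluster-CA certificate the host does not trust):

    curl -fsk https://192.168.94.2:8443/healthz && echo healthy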
	I1020 12:42:21.117395  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:21.204702  258335 kubeadm.go:1113] duration metric: took 3.695131505s to wait for elevateKubeSystemPrivileges
	I1020 12:42:21.204735  258335 kubeadm.go:402] duration metric: took 15.341256879s to StartCluster
	I1020 12:42:21.204756  258335 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:21.204839  258335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:42:21.206160  258335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:21.206390  258335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 12:42:21.206461  258335 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:42:21.206580  258335 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:42:21.206678  258335 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-916479"
	I1020 12:42:21.206694  258335 addons.go:69] Setting default-storageclass=true in profile "newest-cni-916479"
	I1020 12:42:21.206717  258335 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-916479"
	I1020 12:42:21.206719  258335 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-916479"
	I1020 12:42:21.206721  258335 config.go:182] Loaded profile config "newest-cni-916479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:21.206757  258335 host.go:66] Checking if "newest-cni-916479" exists ...
	I1020 12:42:21.207170  258335 cli_runner.go:164] Run: docker container inspect newest-cni-916479 --format={{.State.Status}}
	I1020 12:42:21.207330  258335 cli_runner.go:164] Run: docker container inspect newest-cni-916479 --format={{.State.Status}}
	I1020 12:42:21.210992  258335 out.go:179] * Verifying Kubernetes components...
	
	
	==> CRI-O <==
	Oct 20 12:42:09 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:09.544429202Z" level=info msg="Started container" PID=1831 containerID=855a71eb087dd69b03413ffd203b235ffef81e5df634e08d25d3eaaf8b32f3fe description=kube-system/storage-provisioner/storage-provisioner id=b3424701-f001-46ac-b024-80d72c75f3a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c058cde49b89ca4e8a95b1924aa51aa599600f4dd09f68238302d108bcd4137d
	Oct 20 12:42:09 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:09.545926313Z" level=info msg="Started container" PID=1832 containerID=795907bd876ba320965a69361e57e1f54014de51b8c2efb14d164940a11c589f description=kube-system/coredns-66bc5c9577-vd5sd/coredns id=fdd10a39-11e0-43b0-88ce-543880e6eb90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ffc078b60df9c48868577614644122bb073de64fb6cc84b6142e0d1724cc4c8
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.539761402Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d3928c9e-78a6-400a-bb4a-c6baffeddc70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.539868457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.54555532Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:269aee85d9145ef78b07c8ddab1cf012bf8bd7d8fbd356750ad01f6b267c43d2 UID:13ae6f85-639e-44b4-aa3b-abfc21397973 NetNS:/var/run/netns/861b5f63-cc20-4c95-b29d-edeeadacd59e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128750}] Aliases:map[]}"
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.545590228Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.556935713Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:269aee85d9145ef78b07c8ddab1cf012bf8bd7d8fbd356750ad01f6b267c43d2 UID:13ae6f85-639e-44b4-aa3b-abfc21397973 NetNS:/var/run/netns/861b5f63-cc20-4c95-b29d-edeeadacd59e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128750}] Aliases:map[]}"
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.557075912Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.557716028Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.558593782Z" level=info msg="Ran pod sandbox 269aee85d9145ef78b07c8ddab1cf012bf8bd7d8fbd356750ad01f6b267c43d2 with infra container: default/busybox/POD" id=d3928c9e-78a6-400a-bb4a-c6baffeddc70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.5596858Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b0bab1be-e3f2-43da-a0be-304035b7b7f9 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.559872303Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b0bab1be-e3f2-43da-a0be-304035b7b7f9 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.55992014Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b0bab1be-e3f2-43da-a0be-304035b7b7f9 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.560661375Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1024bd60-75c4-47d7-9e6b-684c7bbbd19c name=/runtime.v1.ImageService/PullImage
	Oct 20 12:42:12 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:12.56623092Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 20 12:42:14 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:14.042751061Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1024bd60-75c4-47d7-9e6b-684c7bbbd19c name=/runtime.v1.ImageService/PullImage
	Oct 20 12:42:14 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:14.043566944Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e195ae31-6f94-49ae-9067-7b7f8dd2d0a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:14 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:14.045091929Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6b927d92-2030-4c1f-8c52-64377d928bbf name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:14 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:14.051983309Z" level=info msg="Creating container: default/busybox/busybox" id=b551f6e7-55e4-402b-9dae-32d7f02e661b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:14 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:14.052097169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:14 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:14.05623035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:14 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:14.056660547Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:14 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:14.097013341Z" level=info msg="Created container 50aa61159b19dee559856640c37d269a8a6ab98653c14681865851d0faf6deb7: default/busybox/busybox" id=b551f6e7-55e4-402b-9dae-32d7f02e661b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:14 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:14.098040009Z" level=info msg="Starting container: 50aa61159b19dee559856640c37d269a8a6ab98653c14681865851d0faf6deb7" id=4ab3a529-338f-4d31-90af-f998f3ccfa7c name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:42:14 default-k8s-diff-port-874012 crio[781]: time="2025-10-20T12:42:14.100443137Z" level=info msg="Started container" PID=1909 containerID=50aa61159b19dee559856640c37d269a8a6ab98653c14681865851d0faf6deb7 description=default/busybox/busybox id=4ab3a529-338f-4d31-90af-f998f3ccfa7c name=/runtime.v1.RuntimeService/StartContainer sandboxID=269aee85d9145ef78b07c8ddab1cf012bf8bd7d8fbd356750ad01f6b267c43d2
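The CRI-O entries above trace the full pull flow for the busybox image: status check, cache miss, pull by tag, resolution to the digest, then container create and start. The same check-then-pull flow can be reproduced with crictl (a sketch, assuming crictl's default runtime endpoint):

    # inspecti fails if the image is absent, triggering the pull by tag.
    sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc >/dev/null 2>&1 \
      || sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
    # Confirm the tag resolved to the digest CRI-O reported above.
    sudo crictl images --digests | grep busybox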
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	50aa61159b19d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   269aee85d9145       busybox                                                default
	795907bd876ba       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   0ffc078b60df9       coredns-66bc5c9577-vd5sd                               kube-system
	855a71eb087dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   c058cde49b89c       storage-provisioner                                    kube-system
	6ddf412fa9c5b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   2cf873820ea7c       kindnet-jrv62                                          kube-system
	b9a7c3f3f7535       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   f4bb4e01476b2       kube-proxy-bbw6k                                       kube-system
	4f011466ea0fb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   7e780e32204e7       etcd-default-k8s-diff-port-874012                      kube-system
	ed967cacaedc0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   a97fad8627e4a       kube-apiserver-default-k8s-diff-port-874012            kube-system
	14ccecac43a81       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   63cd9343ae458       kube-controller-manager-default-k8s-diff-port-874012   kube-system
	67f1a087270a9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   c464fcfec8101       kube-scheduler-default-k8s-diff-port-874012            kube-system
	
	
	==> coredns [795907bd876ba320965a69361e57e1f54014de51b8c2efb14d164940a11c589f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50175 - 62390 "HINFO IN 804413408642246061.5073507589175113921. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.015453881s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-874012
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-874012
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=default-k8s-diff-port-874012
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_41_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:41:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-874012
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:42:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:42:09 +0000   Mon, 20 Oct 2025 12:41:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:42:09 +0000   Mon, 20 Oct 2025 12:41:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:42:09 +0000   Mon, 20 Oct 2025 12:41:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:42:09 +0000   Mon, 20 Oct 2025 12:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-874012
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2780d33f-1af5-4f46-b321-ab4699252d20
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-vd5sd                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-874012                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-jrv62                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-874012             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-874012    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-bbw6k                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-874012             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-874012 event: Registered Node default-k8s-diff-port-874012 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-874012 status is now: NodeReady
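The Allocated resources totals are the column sums of the pod table divided by node capacity: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) = 850m, and 850m of the node's 8 CPUs (8000m) truncates to the 10% shown:

    echo "$(( (100+100+100+250+200+100) * 100 / 8000 ))%"   # -> 10%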
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [4f011466ea0fb6317bf2829830b75ca42ba8b9c933c647bf16b2c7e7d5ba1c8b] <==
	{"level":"warn","ts":"2025-10-20T12:41:49.694683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.701979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.708996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.715835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.722174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.730056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.736514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.744082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.750469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.756889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.764375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.771308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.779126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.792815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.799867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.806595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.813840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.825039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.832626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.840045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:41:49.894326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35598","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-20T12:42:19.770884Z","caller":"traceutil/trace.go:172","msg":"trace[8768181] linearizableReadLoop","detail":"{readStateIndex:449; appliedIndex:449; }","duration":"143.348046ms","start":"2025-10-20T12:42:19.627515Z","end":"2025-10-20T12:42:19.770863Z","steps":["trace[8768181] 'read index received'  (duration: 143.34043ms)","trace[8768181] 'applied index is now lower than readState.Index'  (duration: 6.353µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:19.771029Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.486471ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T12:42:19.771079Z","caller":"traceutil/trace.go:172","msg":"trace[463864850] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:433; }","duration":"143.589043ms","start":"2025-10-20T12:42:19.627481Z","end":"2025-10-20T12:42:19.771070Z","steps":["trace[463864850] 'agreement among raft nodes before linearized reading'  (duration: 143.4635ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:42:19.771179Z","caller":"traceutil/trace.go:172","msg":"trace[1271351084] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"146.674172ms","start":"2025-10-20T12:42:19.624421Z","end":"2025-10-20T12:42:19.771096Z","steps":["trace[1271351084] 'process raft request'  (duration: 146.518522ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:42:22 up  1:24,  0 user,  load average: 2.63, 3.19, 2.09
	Linux default-k8s-diff-port-874012 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ddf412fa9c5bf3bbcb1040bbcc40c635b711b8c936fb82d10e2dd7b080d8543] <==
	I1020 12:41:58.767671       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:41:58.768200       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1020 12:41:58.769303       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:41:58.769465       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:41:58.769500       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:41:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:41:58.968950       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:41:58.969070       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:41:58.969090       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:41:58.969249       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:41:59.272993       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:41:59.273024       1 metrics.go:72] Registering metrics
	I1020 12:41:59.273113       1 controller.go:711] "Syncing nftables rules"
	I1020 12:42:08.969876       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:42:08.969961       1 main.go:301] handling current node
	I1020 12:42:18.970847       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:42:18.970901       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ed967cacaedc0105d52f9d4d529db0f688065484a04be9c37cd8b66d0fafd9a1] <==
	I1020 12:41:50.395305       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:41:50.395694       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 12:41:50.395760       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1020 12:41:50.402050       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1020 12:41:50.409688       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:41:50.412973       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:41:50.598192       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:41:51.299949       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1020 12:41:51.304386       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1020 12:41:51.304404       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:41:51.840309       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:41:51.884259       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:41:52.004602       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1020 12:41:52.011429       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1020 12:41:52.012542       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:41:52.017097       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:41:52.317529       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:41:53.095560       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:41:53.107006       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1020 12:41:53.115984       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 12:41:57.920798       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:41:57.930173       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:41:58.020061       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 12:41:58.118295       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1020 12:42:20.344139       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:53550: use of closed network connection
	
	
	==> kube-controller-manager [14ccecac43a813497d0e8de6e8db071b3542cb0d9cd5cf0f7aa53c996cfe06dd] <==
	I1020 12:41:57.285485       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-874012" podCIDRs=["10.244.0.0/24"]
	I1020 12:41:57.289376       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:41:57.291539       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 12:41:57.298835       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 12:41:57.315614       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 12:41:57.315673       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1020 12:41:57.315673       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:41:57.315701       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:41:57.315709       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:41:57.316817       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 12:41:57.316860       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 12:41:57.316856       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1020 12:41:57.317205       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 12:41:57.317318       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 12:41:57.317357       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 12:41:57.317489       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1020 12:41:57.317505       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 12:41:57.319191       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1020 12:41:57.321267       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 12:41:57.326329       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:41:57.329627       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:41:57.329660       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 12:41:57.337401       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 12:41:57.341697       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:42:12.267733       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b9a7c3f3f7535c64f22e6f9fe46c480673c5209f0633da93bd8d2c2165f805b9] <==
	I1020 12:41:58.554220       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:41:58.627030       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:41:58.728300       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:41:58.728335       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1020 12:41:58.728414       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:41:58.757027       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:41:58.757111       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:41:58.764591       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:41:58.765074       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:41:58.765109       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:41:58.766666       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:41:58.766854       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:41:58.767004       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:41:58.767016       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:41:58.767247       1 config.go:200] "Starting service config controller"
	I1020 12:41:58.767259       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:41:58.767297       1 config.go:309] "Starting node config controller"
	I1020 12:41:58.767304       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:41:58.767310       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:41:58.868886       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:41:58.868947       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 12:41:58.869287       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [67f1a087270a96658e8708adc18c7cc65376baf00d711c7529ff71e9819eea15] <==
	E1020 12:41:50.352199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:41:50.352252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:41:50.352248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:41:50.352591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:41:50.352687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:41:50.352850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:41:50.352689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:41:50.352990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 12:41:50.353267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 12:41:50.353306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 12:41:51.183032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:41:51.209388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 12:41:51.236106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:41:51.237094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:41:51.273253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:41:51.302483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:41:51.324819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:41:51.367921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:41:51.546264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:41:51.551580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:41:51.609928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 12:41:51.611855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:41:51.627160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1020 12:41:51.648382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1020 12:41:54.147706       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:41:54 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:54.011111    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-874012" podStartSLOduration=1.011088947 podStartE2EDuration="1.011088947s" podCreationTimestamp="2025-10-20 12:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:41:54.008925772 +0000 UTC m=+1.152910572" watchObservedRunningTime="2025-10-20 12:41:54.011088947 +0000 UTC m=+1.155073744"
	Oct 20 12:41:54 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:54.022401    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-874012" podStartSLOduration=1.022383068 podStartE2EDuration="1.022383068s" podCreationTimestamp="2025-10-20 12:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:41:54.022315014 +0000 UTC m=+1.166299815" watchObservedRunningTime="2025-10-20 12:41:54.022383068 +0000 UTC m=+1.166367866"
	Oct 20 12:41:54 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:54.033028    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-874012" podStartSLOduration=1.033004117 podStartE2EDuration="1.033004117s" podCreationTimestamp="2025-10-20 12:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:41:54.032735381 +0000 UTC m=+1.176720182" watchObservedRunningTime="2025-10-20 12:41:54.033004117 +0000 UTC m=+1.176988917"
	Oct 20 12:41:54 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:54.042098    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-874012" podStartSLOduration=1.042077832 podStartE2EDuration="1.042077832s" podCreationTimestamp="2025-10-20 12:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:41:54.041976278 +0000 UTC m=+1.185961077" watchObservedRunningTime="2025-10-20 12:41:54.042077832 +0000 UTC m=+1.186062632"
	Oct 20 12:41:57 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:57.382814    1307 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 20 12:41:57 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:57.383508    1307 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 20 12:41:58 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:58.172200    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fc9fff8-30ab-4d81-868c-9d06b36040de-xtables-lock\") pod \"kube-proxy-bbw6k\" (UID: \"5fc9fff8-30ab-4d81-868c-9d06b36040de\") " pod="kube-system/kube-proxy-bbw6k"
	Oct 20 12:41:58 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:58.172241    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fc9fff8-30ab-4d81-868c-9d06b36040de-lib-modules\") pod \"kube-proxy-bbw6k\" (UID: \"5fc9fff8-30ab-4d81-868c-9d06b36040de\") " pod="kube-system/kube-proxy-bbw6k"
	Oct 20 12:41:58 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:58.172270    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e844105-d285-40a8-8cf7-30221c1e2034-lib-modules\") pod \"kindnet-jrv62\" (UID: \"0e844105-d285-40a8-8cf7-30221c1e2034\") " pod="kube-system/kindnet-jrv62"
	Oct 20 12:41:58 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:58.172299    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv6fl\" (UniqueName: \"kubernetes.io/projected/5fc9fff8-30ab-4d81-868c-9d06b36040de-kube-api-access-hv6fl\") pod \"kube-proxy-bbw6k\" (UID: \"5fc9fff8-30ab-4d81-868c-9d06b36040de\") " pod="kube-system/kube-proxy-bbw6k"
	Oct 20 12:41:58 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:58.172393    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5fc9fff8-30ab-4d81-868c-9d06b36040de-kube-proxy\") pod \"kube-proxy-bbw6k\" (UID: \"5fc9fff8-30ab-4d81-868c-9d06b36040de\") " pod="kube-system/kube-proxy-bbw6k"
	Oct 20 12:41:58 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:58.172438    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0e844105-d285-40a8-8cf7-30221c1e2034-cni-cfg\") pod \"kindnet-jrv62\" (UID: \"0e844105-d285-40a8-8cf7-30221c1e2034\") " pod="kube-system/kindnet-jrv62"
	Oct 20 12:41:58 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:58.172466    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e844105-d285-40a8-8cf7-30221c1e2034-xtables-lock\") pod \"kindnet-jrv62\" (UID: \"0e844105-d285-40a8-8cf7-30221c1e2034\") " pod="kube-system/kindnet-jrv62"
	Oct 20 12:41:58 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:58.172489    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbnmp\" (UniqueName: \"kubernetes.io/projected/0e844105-d285-40a8-8cf7-30221c1e2034-kube-api-access-hbnmp\") pod \"kindnet-jrv62\" (UID: \"0e844105-d285-40a8-8cf7-30221c1e2034\") " pod="kube-system/kindnet-jrv62"
	Oct 20 12:41:59 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:59.001756    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jrv62" podStartSLOduration=1.001734528 podStartE2EDuration="1.001734528s" podCreationTimestamp="2025-10-20 12:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:41:59.001471018 +0000 UTC m=+6.145455840" watchObservedRunningTime="2025-10-20 12:41:59.001734528 +0000 UTC m=+6.145719328"
	Oct 20 12:41:59 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:41:59.015141    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bbw6k" podStartSLOduration=1.015119146 podStartE2EDuration="1.015119146s" podCreationTimestamp="2025-10-20 12:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:41:59.015067391 +0000 UTC m=+6.159052192" watchObservedRunningTime="2025-10-20 12:41:59.015119146 +0000 UTC m=+6.159103950"
	Oct 20 12:42:09 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:42:09.156801    1307 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 20 12:42:09 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:42:09.255746    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfwdl\" (UniqueName: \"kubernetes.io/projected/72e24caa-a3c3-45b6-bcf6-42b600c08fce-kube-api-access-dfwdl\") pod \"coredns-66bc5c9577-vd5sd\" (UID: \"72e24caa-a3c3-45b6-bcf6-42b600c08fce\") " pod="kube-system/coredns-66bc5c9577-vd5sd"
	Oct 20 12:42:09 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:42:09.255821    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72e24caa-a3c3-45b6-bcf6-42b600c08fce-config-volume\") pod \"coredns-66bc5c9577-vd5sd\" (UID: \"72e24caa-a3c3-45b6-bcf6-42b600c08fce\") " pod="kube-system/coredns-66bc5c9577-vd5sd"
	Oct 20 12:42:09 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:42:09.255862    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c07250e9-4c89-414f-94b6-af63b9e5d71d-tmp\") pod \"storage-provisioner\" (UID: \"c07250e9-4c89-414f-94b6-af63b9e5d71d\") " pod="kube-system/storage-provisioner"
	Oct 20 12:42:09 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:42:09.255893    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbtbc\" (UniqueName: \"kubernetes.io/projected/c07250e9-4c89-414f-94b6-af63b9e5d71d-kube-api-access-mbtbc\") pod \"storage-provisioner\" (UID: \"c07250e9-4c89-414f-94b6-af63b9e5d71d\") " pod="kube-system/storage-provisioner"
	Oct 20 12:42:10 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:42:10.029274    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vd5sd" podStartSLOduration=12.02925291 podStartE2EDuration="12.02925291s" podCreationTimestamp="2025-10-20 12:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:10.028848951 +0000 UTC m=+17.172833748" watchObservedRunningTime="2025-10-20 12:42:10.02925291 +0000 UTC m=+17.173237711"
	Oct 20 12:42:10 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:42:10.039905    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.039884743 podStartE2EDuration="11.039884743s" podCreationTimestamp="2025-10-20 12:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:10.039557381 +0000 UTC m=+17.183542194" watchObservedRunningTime="2025-10-20 12:42:10.039884743 +0000 UTC m=+17.183869546"
	Oct 20 12:42:12 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:42:12.275807    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76mwd\" (UniqueName: \"kubernetes.io/projected/13ae6f85-639e-44b4-aa3b-abfc21397973-kube-api-access-76mwd\") pod \"busybox\" (UID: \"13ae6f85-639e-44b4-aa3b-abfc21397973\") " pod="default/busybox"
	Oct 20 12:42:15 default-k8s-diff-port-874012 kubelet[1307]: I1020 12:42:15.044094    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.559858598 podStartE2EDuration="3.04406932s" podCreationTimestamp="2025-10-20 12:42:12 +0000 UTC" firstStartedPulling="2025-10-20 12:42:12.560241194 +0000 UTC m=+19.704225977" lastFinishedPulling="2025-10-20 12:42:14.044451917 +0000 UTC m=+21.188436699" observedRunningTime="2025-10-20 12:42:15.043362407 +0000 UTC m=+22.187347208" watchObservedRunningTime="2025-10-20 12:42:15.04406932 +0000 UTC m=+22.188054122"
	
	
	==> storage-provisioner [855a71eb087dd69b03413ffd203b235ffef81e5df634e08d25d3eaaf8b32f3fe] <==
	I1020 12:42:09.559663       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 12:42:09.568287       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 12:42:09.568330       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 12:42:09.571014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:09.576064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:42:09.576195       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 12:42:09.576473       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-874012_d420e194-d038-4c38-aa3b-f16235b442ee!
	I1020 12:42:09.576628       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"992be66e-ad31-4768-ae4d-5fe58274f9ef", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-874012_d420e194-d038-4c38-aa3b-f16235b442ee became leader
	W1020 12:42:09.577926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:09.584039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:42:09.677542       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-874012_d420e194-d038-4c38-aa3b-f16235b442ee!
	W1020 12:42:11.587673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:11.593028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:13.596694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:13.601221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:15.604529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:15.609072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:17.613034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:17.618953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:19.622250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:19.772287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:21.776106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:21.780734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-874012 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.53s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.29s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-916479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-916479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (291.133493ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:42:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-916479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
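The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, which (per the stderr) shells out to `sudo runc list -f json` inside the node container and fails because the runc state directory /run/runc does not exist. A minimal manual triage sketch, assuming the container is still running; these commands are not part of the test harness, and the container name is taken from the docker inspect output below:

	# Reproduce the check exactly as minikube runs it (expected to exit 1 here).
	docker exec newest-cni-916479 sudo runc list -f json
	# Confirm the missing state directory reported in stderr.
	docker exec newest-cni-916479 ls -ld /run/runc
	# Hypothetical workaround: recreate the directory, then retry `addons enable`.
	docker exec newest-cni-916479 sudo mkdir -p /run/runc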
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-916479
helpers_test.go:243: (dbg) docker inspect newest-cni-916479:

-- stdout --
	[
	    {
	        "Id": "f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef",
	        "Created": "2025-10-20T12:42:01.570705232Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 259303,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:42:01.614345305Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef/hosts",
	        "LogPath": "/var/lib/docker/containers/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef-json.log",
	        "Name": "/newest-cni-916479",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-916479:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-916479",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef",
	                "LowerDir": "/var/lib/docker/overlay2/972670fa2c6ad8377499180ff37e45c87569b7462a3871ce6df2ede18d0d4614-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/972670fa2c6ad8377499180ff37e45c87569b7462a3871ce6df2ede18d0d4614/merged",
	                "UpperDir": "/var/lib/docker/overlay2/972670fa2c6ad8377499180ff37e45c87569b7462a3871ce6df2ede18d0d4614/diff",
	                "WorkDir": "/var/lib/docker/overlay2/972670fa2c6ad8377499180ff37e45c87569b7462a3871ce6df2ede18d0d4614/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-916479",
	                "Source": "/var/lib/docker/volumes/newest-cni-916479/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-916479",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-916479",
	                "name.minikube.sigs.k8s.io": "newest-cni-916479",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32ea8f34b402efec17f6098f3155c1c693bebc070c448accae9dc0c27643c6d5",
	            "SandboxKey": "/var/run/docker/netns/32ea8f34b402",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-916479": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:95:a0:f8:cb:d0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0498bf893ff4ca5c840f9bd85d2a414a351b283489487091a509c21cecdac157",
	                    "EndpointID": "a7efe747cc22788dd439631e7217df978d365aade4ea4f1dd13fed390d9afe6d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-916479",
	                        "f767c4ce93d0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
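As an aside, the host-port mappings shown in the inspect dump can be pulled out directly with a Go-template query; a hypothetical convenience one-liner, not something the harness runs:

	# Prints 33078, the host port bound to the node container's SSH port 22/tcp.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-916479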
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-916479 -n newest-cni-916479
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-916479 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-916479 logs -n 25: (1.03664619s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-196539    │ jenkins │ v1.37.0 │ 20 Oct 25 12:39 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-384253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p old-k8s-version-384253 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-384253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:41 UTC │
	│ addons  │ enable metrics-server -p no-preload-649841 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │                     │
	│ stop    │ -p no-preload-649841 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p no-preload-649841 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:41 UTC │
	│ image   │ old-k8s-version-384253 image list --format=json                                                                                                                                                                                               │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p old-k8s-version-384253 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ start   │ -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:42 UTC │
	│ image   │ no-preload-649841 image list --format=json                                                                                                                                                                                                    │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p no-preload-649841 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ delete  │ -p no-preload-649841                                                                                                                                                                                                                          │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ delete  │ -p no-preload-649841                                                                                                                                                                                                                          │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ start   │ -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p cert-expiration-365628 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p cert-expiration-365628                                                                                                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p disable-driver-mounts-796609                                                                                                                                                                                                               │ disable-driver-mounts-796609 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-874012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-916479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:42:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:42:14.612898  263183 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:42:14.613136  263183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:42:14.613145  263183 out.go:374] Setting ErrFile to fd 2...
	I1020 12:42:14.613149  263183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:42:14.613410  263183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:42:14.613933  263183 out.go:368] Setting JSON to false
	I1020 12:42:14.615116  263183 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5084,"bootTime":1760959051,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:42:14.615204  263183 start.go:141] virtualization: kvm guest
	I1020 12:42:14.617617  263183 out.go:179] * [embed-certs-907116] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:42:14.619342  263183 notify.go:220] Checking for updates...
	I1020 12:42:14.619349  263183 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:42:14.620948  263183 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:42:14.622371  263183 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:42:14.623804  263183 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:42:14.625173  263183 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:42:14.626458  263183 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:42:14.628377  263183 config.go:182] Loaded profile config "default-k8s-diff-port-874012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:14.628519  263183 config.go:182] Loaded profile config "kubernetes-upgrade-196539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:14.628696  263183 config.go:182] Loaded profile config "newest-cni-916479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:14.628842  263183 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:42:14.654737  263183 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:42:14.654852  263183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:42:14.711374  263183 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-20 12:42:14.701233955 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:42:14.711494  263183 docker.go:318] overlay module found
	I1020 12:42:14.713371  263183 out.go:179] * Using the docker driver based on user configuration
	I1020 12:42:14.714654  263183 start.go:305] selected driver: docker
	I1020 12:42:14.714675  263183 start.go:925] validating driver "docker" against <nil>
	I1020 12:42:14.714686  263183 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:42:14.715311  263183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:42:14.777765  263183 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-20 12:42:14.766894275 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:42:14.777938  263183 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 12:42:14.778205  263183 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:42:14.780112  263183 out.go:179] * Using Docker driver with root privileges
	I1020 12:42:14.781325  263183 cni.go:84] Creating CNI manager for ""
	I1020 12:42:14.781391  263183 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:42:14.781402  263183 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 12:42:14.781470  263183 start.go:349] cluster config:
	{Name:embed-certs-907116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
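
The profile dump above reduces to a handful of salient fields for this run: docker driver, 3072 MB, 2 CPUs, embedded certs, crio runtime, CNI networking. A minimal Go sketch of persisting such a profile to config.json, the way the profile.go:143 line further down does; the field set and type names here are illustrative only, not minikube's actual ClusterConfig:

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// Illustrative shapes; minikube's real config types carry many more fields.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string // "crio" in this run
	NetworkPlugin     string // "cni"
	ServiceCIDR       string // "10.96.0.0/12"
}

type ClusterConfig struct {
	Name             string
	Driver           string // "docker"
	Memory           int    // MB
	CPUs             int
	EmbedCerts       bool
	KubernetesConfig KubernetesConfig
}

func saveProfile(root string, cc ClusterConfig) error {
	dir := filepath.Join(root, "profiles", cc.Name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(cc, "", "  ")
	if err != nil {
		return err
	}
	// The real write is guarded by a file lock (see the lock.go:35 line below).
	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
}

func main() {
	_ = saveProfile("/tmp/minikube-demo", ClusterConfig{
		Name: "embed-certs-907116", Driver: "docker", Memory: 3072, CPUs: 2, EmbedCerts: true,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.34.1", ClusterName: "embed-certs-907116",
			ContainerRuntime: "crio", NetworkPlugin: "cni", ServiceCIDR: "10.96.0.0/12",
		},
	})
}
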
	I1020 12:42:14.782822  263183 out.go:179] * Starting "embed-certs-907116" primary control-plane node in "embed-certs-907116" cluster
	I1020 12:42:14.784144  263183 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:42:14.785365  263183 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:42:14.786576  263183 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:42:14.786616  263183 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:42:14.786642  263183 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:42:14.786657  263183 cache.go:58] Caching tarball of preloaded images
	I1020 12:42:14.786812  263183 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:42:14.786827  263183 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:42:14.786919  263183 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/config.json ...
	I1020 12:42:14.786938  263183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/config.json: {Name:mk5a4efe560faa4bc64ec4e339c8130dc538a5d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:14.808557  263183 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:42:14.808585  263183 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:42:14.808602  263183 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:42:14.808631  263183 start.go:360] acquireMachinesLock for embed-certs-907116: {Name:mk081262f5d599396d0c232c9311858444bc2e47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:42:14.808755  263183 start.go:364] duration metric: took 98.678µs to acquireMachinesLock for "embed-certs-907116"
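
The acquireMachinesLock parameters above ({Delay:500ms Timeout:10m0s}) describe a retry-with-deadline lock. A sketch of that pattern, assuming an O_EXCL lock file as a stand-in for minikube's actual mutex implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // caller invokes to release
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out after %s acquiring %s", timeout, path)
		}
		time.Sleep(delay) // back off between attempts, 500ms in the trace
	}
}

func main() {
	release, err := acquireLock("/tmp/machines-embed-certs-907116.lock",
		500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println("lock:", err)
		return
	}
	defer release()
	fmt.Println("lock held; machine provisioning would proceed here")
}

The trace's 98.678µs acquisition time is the uncontended fast path: the first attempt succeeds and no sleep is taken.
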
	I1020 12:42:14.808798  263183 start.go:93] Provisioning new machine with config: &{Name:embed-certs-907116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:42:14.808905  263183 start.go:125] createHost starting for "" (driver="docker")
	I1020 12:42:11.794102  258335 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501823946s
	I1020 12:42:11.798377  258335 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 12:42:11.798551  258335 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1020 12:42:11.798711  258335 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 12:42:11.798860  258335 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 12:42:13.411880  258335 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.613507939s
	I1020 12:42:13.937725  258335 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.139254043s
	I1020 12:42:15.800034  258335 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001680671s
	I1020 12:42:15.814573  258335 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 12:42:15.826053  258335 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 12:42:15.837153  258335 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 12:42:15.837522  258335 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-916479 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 12:42:15.846797  258335 kubeadm.go:318] [bootstrap-token] Using token: ws4wu0.nclu899m1xyb6vga
	I1020 12:42:15.848493  258335 out.go:252]   - Configuring RBAC rules ...
	I1020 12:42:15.848636  258335 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 12:42:15.852291  258335 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 12:42:15.860989  258335 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 12:42:15.864245  258335 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 12:42:15.867136  258335 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 12:42:15.869912  258335 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
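
The [control-plane-check] lines in the 258335 trace above poll each component's livez/healthz endpoint until it returns 200 or a 4m0s deadline expires, reporting how long each took to turn healthy. A sketch of that polling loop, using the endpoints from the log; the probe code itself is illustrative, not kubeadm's:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) (time.Duration, error) {
	// Control-plane serving certs are cluster-internal, so the probe skips
	// verification (acceptable for a liveness probe, never for real traffic).
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	start := time.Now()
	for time.Since(start) < timeout {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return time.Since(start), nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return 0, fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	endpoints := []string{
		"https://192.168.85.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}
	for _, url := range endpoints {
		if d, err := waitHealthy(url, 4*time.Minute); err == nil {
			fmt.Printf("%s healthy after %s\n", url, d)
		} else {
			fmt.Println(err)
		}
	}
}

Note the loopback addresses: controller-manager and scheduler only serve their health ports locally, so these checks run on the control-plane node itself.
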
	I1020 12:42:11.153570  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:11.153604  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:11.187648  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:42:11.187685  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:42:11.223391  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:11.223421  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:13.825836  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:42:13.826264  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:42:13.826322  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:42:13.826381  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:42:13.863995  236655 cri.go:89] found id: "badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:13.864017  236655 cri.go:89] found id: ""
	I1020 12:42:13.864026  236655 logs.go:282] 1 containers: [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a]
	I1020 12:42:13.864085  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:13.874909  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:42:13.874974  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:42:13.927345  236655 cri.go:89] found id: ""
	I1020 12:42:13.927369  236655 logs.go:282] 0 containers: []
	W1020 12:42:13.927379  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:42:13.927386  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:42:13.927443  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:42:13.959597  236655 cri.go:89] found id: ""
	I1020 12:42:13.959626  236655 logs.go:282] 0 containers: []
	W1020 12:42:13.959637  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:42:13.959649  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:42:13.959710  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:42:13.994975  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:13.994996  236655 cri.go:89] found id: ""
	I1020 12:42:13.995005  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:42:13.995052  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:14.000482  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:42:14.000588  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:42:14.032838  236655 cri.go:89] found id: ""
	I1020 12:42:14.032864  236655 logs.go:282] 0 containers: []
	W1020 12:42:14.032874  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:42:14.032880  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:42:14.032944  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:42:14.066245  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:14.066271  236655 cri.go:89] found id: "f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
	I1020 12:42:14.066276  236655 cri.go:89] found id: ""
	I1020 12:42:14.066285  236655 logs.go:282] 2 containers: [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72]
	I1020 12:42:14.066339  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:14.071305  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:14.077161  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:42:14.077230  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:42:14.114057  236655 cri.go:89] found id: ""
	I1020 12:42:14.114088  236655 logs.go:282] 0 containers: []
	W1020 12:42:14.114098  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:42:14.114103  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:42:14.114150  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:42:14.149482  236655 cri.go:89] found id: ""
	I1020 12:42:14.149514  236655 logs.go:282] 0 containers: []
	W1020 12:42:14.149524  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:42:14.149544  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:42:14.149559  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:42:14.209522  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:42:14.209555  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:42:14.245216  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:42:14.245246  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:14.296883  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:14.296919  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:14.325300  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:14.325324  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:14.420308  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:42:14.420338  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:42:14.436798  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:42:14.436826  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:42:14.496719  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:42:14.496745  236655 logs.go:123] Gathering logs for kube-apiserver [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a] ...
	I1020 12:42:14.496760  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:14.533971  236655 logs.go:123] Gathering logs for kube-controller-manager [f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72] ...
	I1020 12:42:14.534005  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f440ed7023e6c90497f990d973d4fe2195a62c7f2740370e9b69fc4a87828e72"
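
The gather loop in the 236655 trace above repeats one pattern per component: list candidate container IDs with `crictl ps -a --quiet --name=<component>`, then tail each found container's logs with `crictl logs --tail 400`. A self-contained sketch of that loop (error handling trimmed; this is the shape of the trace, not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns all container IDs (any state) whose name matches.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, name := range components {
		ids := containerIDs(name)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s", name, id, logs)
		}
	}
}

In this run only kube-apiserver, kube-scheduler, and kube-controller-manager yield IDs, which is why the trace keeps logging "No container was found matching" for etcd, coredns, kube-proxy, kindnet, and storage-provisioner.
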
	I1020 12:42:16.207312  258335 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 12:42:16.624359  258335 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 12:42:17.206717  258335 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 12:42:17.207519  258335 kubeadm.go:318] 
	I1020 12:42:17.207636  258335 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 12:42:17.207648  258335 kubeadm.go:318] 
	I1020 12:42:17.207761  258335 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 12:42:17.207795  258335 kubeadm.go:318] 
	I1020 12:42:17.207852  258335 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 12:42:17.207944  258335 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 12:42:17.208051  258335 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 12:42:17.208064  258335 kubeadm.go:318] 
	I1020 12:42:17.208145  258335 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 12:42:17.208155  258335 kubeadm.go:318] 
	I1020 12:42:17.208230  258335 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 12:42:17.208244  258335 kubeadm.go:318] 
	I1020 12:42:17.208311  258335 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 12:42:17.208433  258335 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 12:42:17.208526  258335 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 12:42:17.208536  258335 kubeadm.go:318] 
	I1020 12:42:17.208653  258335 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 12:42:17.208761  258335 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 12:42:17.208795  258335 kubeadm.go:318] 
	I1020 12:42:17.208908  258335 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ws4wu0.nclu899m1xyb6vga \
	I1020 12:42:17.209051  258335 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e \
	I1020 12:42:17.209086  258335 kubeadm.go:318] 	--control-plane 
	I1020 12:42:17.209116  258335 kubeadm.go:318] 
	I1020 12:42:17.209231  258335 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 12:42:17.209238  258335 kubeadm.go:318] 
	I1020 12:42:17.209353  258335 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ws4wu0.nclu899m1xyb6vga \
	I1020 12:42:17.209502  258335 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e 
	I1020 12:42:17.212795  258335 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1020 12:42:17.213011  258335 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
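
The --discovery-token-ca-cert-hash in the join commands above is, by kubeadm convention, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes it from the CA cert on the control-plane node:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block in ca.crt")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}

Run against this cluster's CA it should print the sha256:320d71c0... value from the join command, letting a joining node verify it is talking to the right control plane.
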
	I1020 12:42:17.213039  258335 cni.go:84] Creating CNI manager for ""
	I1020 12:42:17.213048  258335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:42:17.217680  258335 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 12:42:14.810925  263183 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 12:42:14.811173  263183 start.go:159] libmachine.API.Create for "embed-certs-907116" (driver="docker")
	I1020 12:42:14.811206  263183 client.go:168] LocalClient.Create starting
	I1020 12:42:14.811299  263183 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 12:42:14.811344  263183 main.go:141] libmachine: Decoding PEM data...
	I1020 12:42:14.811369  263183 main.go:141] libmachine: Parsing certificate...
	I1020 12:42:14.811440  263183 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 12:42:14.811465  263183 main.go:141] libmachine: Decoding PEM data...
	I1020 12:42:14.811491  263183 main.go:141] libmachine: Parsing certificate...
	I1020 12:42:14.811887  263183 cli_runner.go:164] Run: docker network inspect embed-certs-907116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:42:14.829796  263183 cli_runner.go:211] docker network inspect embed-certs-907116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:42:14.829872  263183 network_create.go:284] running [docker network inspect embed-certs-907116] to gather additional debugging logs...
	I1020 12:42:14.829890  263183 cli_runner.go:164] Run: docker network inspect embed-certs-907116
	W1020 12:42:14.846153  263183 cli_runner.go:211] docker network inspect embed-certs-907116 returned with exit code 1
	I1020 12:42:14.846183  263183 network_create.go:287] error running [docker network inspect embed-certs-907116]: docker network inspect embed-certs-907116: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-907116 not found
	I1020 12:42:14.846198  263183 network_create.go:289] output of [docker network inspect embed-certs-907116]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-907116 not found
	
	** /stderr **
	I1020 12:42:14.846313  263183 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:42:14.864407  263183 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
	I1020 12:42:14.865395  263183 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b260bb82684 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:b2:17:6a:60:46} reservation:<nil>}
	I1020 12:42:14.866512  263183 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-db577f5b7d12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:85:a2:b7:03:1d} reservation:<nil>}
	I1020 12:42:14.867876  263183 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e868e0}
	I1020 12:42:14.867906  263183 network_create.go:124] attempt to create docker network embed-certs-907116 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1020 12:42:14.867960  263183 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-907116 embed-certs-907116
	I1020 12:42:14.945000  263183 network_create.go:108] docker network embed-certs-907116 192.168.76.0/24 created
	I1020 12:42:14.945044  263183 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-907116" container
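
The network.go lines above walk candidate private /24s (third octet 49, 58, 67, 76, advancing by 9), skip any already taken, and create the docker bridge network on the first free one with the exact flags shown. A sketch of that search; it checks only host interface addresses (minikube's real reservation logic is more thorough), and the network name is a placeholder:

package main

import (
	"fmt"
	"net"
	"os/exec"
)

// taken reports whether any host interface already owns the candidate gateway.
func taken(gateway string) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gateway {
			return true
		}
	}
	return false
}

func main() {
	for octet := 49; octet < 255; octet += 9 { // 49, 58, 67, 76, ... as in the trace
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		if taken(gateway) {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
			"demo-network").CombinedOutput() // placeholder name
		if err != nil {
			fmt.Printf("create on %s failed: %s\n", subnet, out)
			continue
		}
		fmt.Println("docker network demo-network", subnet, "created")
		return
	}
}

With three earlier clusters holding .49, .58, and .67, the first free candidate is 192.168.76.0/24, matching the trace; the node then gets the first client address, 192.168.76.2.
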
	I1020 12:42:14.945127  263183 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:42:14.966398  263183 cli_runner.go:164] Run: docker volume create embed-certs-907116 --label name.minikube.sigs.k8s.io=embed-certs-907116 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:42:14.987903  263183 oci.go:103] Successfully created a docker volume embed-certs-907116
	I1020 12:42:14.987966  263183 cli_runner.go:164] Run: docker run --rm --name embed-certs-907116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-907116 --entrypoint /usr/bin/test -v embed-certs-907116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 12:42:15.439094  263183 oci.go:107] Successfully prepared a docker volume embed-certs-907116
	I1020 12:42:15.439140  263183 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:42:15.439164  263183 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:42:15.439222  263183 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-907116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
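
The preload step above unpacks the cached image tarball into the named volume by running a throwaway kicbase container whose entrypoint is tar. A sketch wrapping that same docker invocation; paths and image ref are taken from the log, the wrapper itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mounts the lz4 tarball read-only and the volume writable,
// then untars the preloaded images into the volume.
func extractPreload(tarball, volume, image string) error {
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4",
		"embed-certs-907116",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	)
	fmt.Println("extract err:", err)
}

Doing the extraction inside a container means the host needs neither lz4 nor write access to the volume's backing directory; the node container later mounts the same volume with the images already in place.
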
	I1020 12:42:17.219466  258335 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 12:42:17.224312  258335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 12:42:17.224329  258335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 12:42:17.239339  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
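
Applying CNI here is two steps: confirm the portmap plugin binary exists on the node, then `kubectl apply` the generated kindnet manifest with the cluster's bundled kubectl. A sketch of the same two calls:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Step 1: the portmap plugin must exist before a CNI that uses it is applied.
	if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
		fmt.Println("portmap CNI plugin missing:", err)
		return
	}
	// Step 2: apply the manifest that was scp'd to /var/tmp/minikube/cni.yaml.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	if err != nil {
		fmt.Printf("apply failed: %v: %s\n", err, out)
		return
	}
	fmt.Printf("%s", out)
}

In the trace these commands run over SSH inside the node container, which is why everything goes through ssh_runner rather than a local shell.
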
	I1020 12:42:17.509515  258335 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 12:42:17.509616  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-916479 minikube.k8s.io/updated_at=2025_10_20T12_42_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=newest-cni-916479 minikube.k8s.io/primary=true
	I1020 12:42:17.509677  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:17.616338  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:17.634395  258335 ops.go:34] apiserver oom_adj: -16
	I1020 12:42:18.116442  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:18.617026  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:19.117407  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:19.617404  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:20.116810  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:20.616573  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
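
The repeated `kubectl get sa default` runs above are a readiness poll on a roughly 500ms cadence: the default ServiceAccount appearing in kube-system bootstrap signals that RBAC setup is usable (elevateKubeSystemPrivileges completes about 3.7s later in this trace). A sketch of the poll:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// defaultSAExists succeeds once the default ServiceAccount can be fetched.
func defaultSAExists(kubectl, kubeconfig string) bool {
	return exec.Command("sudo", kubectl, "get", "sa", "default",
		"--kubeconfig="+kubeconfig).Run() == nil
}

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute) // illustrative deadline
	for time.Now().Before(deadline) {
		if defaultSAExists(kubectl, kubeconfig) {
			fmt.Println("default service account present; RBAC bootstrap complete")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
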
	I1020 12:42:17.062905  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:42:17.063371  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:42:17.063444  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:42:17.063506  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:42:17.094197  236655 cri.go:89] found id: "badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:17.094225  236655 cri.go:89] found id: ""
	I1020 12:42:17.094241  236655 logs.go:282] 1 containers: [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a]
	I1020 12:42:17.094311  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:17.098437  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:42:17.098487  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:42:17.127195  236655 cri.go:89] found id: ""
	I1020 12:42:17.127223  236655 logs.go:282] 0 containers: []
	W1020 12:42:17.127233  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:42:17.127241  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:42:17.127309  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:42:17.157622  236655 cri.go:89] found id: ""
	I1020 12:42:17.157649  236655 logs.go:282] 0 containers: []
	W1020 12:42:17.157658  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:42:17.157665  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:42:17.157728  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:42:17.185165  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:17.185191  236655 cri.go:89] found id: ""
	I1020 12:42:17.185201  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:42:17.185248  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:17.189243  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:42:17.189313  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:42:17.221549  236655 cri.go:89] found id: ""
	I1020 12:42:17.221574  236655 logs.go:282] 0 containers: []
	W1020 12:42:17.221581  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:42:17.221586  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:42:17.221629  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:42:17.253344  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:17.253367  236655 cri.go:89] found id: ""
	I1020 12:42:17.253377  236655 logs.go:282] 1 containers: [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb]
	I1020 12:42:17.253432  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:17.257573  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:42:17.257658  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:42:17.289319  236655 cri.go:89] found id: ""
	I1020 12:42:17.289351  236655 logs.go:282] 0 containers: []
	W1020 12:42:17.289362  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:42:17.289369  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:42:17.289425  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:42:17.323210  236655 cri.go:89] found id: ""
	I1020 12:42:17.323238  236655 logs.go:282] 0 containers: []
	W1020 12:42:17.323248  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:42:17.323260  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:42:17.323275  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:42:17.400751  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:42:17.400797  236655 logs.go:123] Gathering logs for kube-apiserver [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a] ...
	I1020 12:42:17.400814  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:17.442468  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:42:17.442499  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:17.509371  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:17.509408  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:17.545130  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:42:17.545158  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:42:17.623125  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:42:17.623162  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:42:17.663106  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:17.663140  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:17.755914  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:42:17.755947  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:42:20.272922  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:42:20.273801  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:42:20.273862  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:42:20.273927  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:42:20.308840  236655 cri.go:89] found id: "badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:20.308867  236655 cri.go:89] found id: ""
	I1020 12:42:20.308877  236655 logs.go:282] 1 containers: [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a]
	I1020 12:42:20.308939  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:20.313298  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:42:20.313368  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:42:20.346898  236655 cri.go:89] found id: ""
	I1020 12:42:20.346925  236655 logs.go:282] 0 containers: []
	W1020 12:42:20.346934  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:42:20.346943  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:42:20.347001  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:42:20.383379  236655 cri.go:89] found id: ""
	I1020 12:42:20.383402  236655 logs.go:282] 0 containers: []
	W1020 12:42:20.383411  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:42:20.383418  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:42:20.383470  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:42:20.418045  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:20.418071  236655 cri.go:89] found id: ""
	I1020 12:42:20.418080  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:42:20.418130  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:20.422785  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:42:20.422903  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:42:20.457488  236655 cri.go:89] found id: ""
	I1020 12:42:20.457513  236655 logs.go:282] 0 containers: []
	W1020 12:42:20.457524  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:42:20.457531  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:42:20.457589  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:42:20.487727  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:20.487751  236655 cri.go:89] found id: ""
	I1020 12:42:20.487761  236655 logs.go:282] 1 containers: [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb]
	I1020 12:42:20.487853  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:20.492928  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:42:20.492998  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:42:20.522709  236655 cri.go:89] found id: ""
	I1020 12:42:20.522743  236655 logs.go:282] 0 containers: []
	W1020 12:42:20.522752  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:42:20.522760  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:42:20.522872  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:42:20.553404  236655 cri.go:89] found id: ""
	I1020 12:42:20.553426  236655 logs.go:282] 0 containers: []
	W1020 12:42:20.553436  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:42:20.553463  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:20.553479  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:20.673393  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:42:20.673446  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:42:20.693218  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:42:20.693261  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:42:20.772032  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:42:20.772061  236655 logs.go:123] Gathering logs for kube-apiserver [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a] ...
	I1020 12:42:20.772078  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:20.836385  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:42:20.836426  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:20.931525  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:20.931570  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:20.970665  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:42:20.970695  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:42:21.064386  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:42:21.064426  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
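The run above (PID 236655) is minikube assembling a diagnostic bundle: for each control-plane component it lists matching CRI containers with "crictl ps -a --quiet --name=...", then tails 400 log lines from any container it finds, falling back to "No container was found matching ..." when the list is empty. A minimal local sketch of that pattern follows; it assumes crictl is installed and sudo-capable, and is illustrative rather than minikube's actual cri.go/logs.go code, which runs the same commands over SSH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all CRI containers (running or not) whose name matches.
// crictl exits 0 with empty output when nothing matches.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Mirror of the "crictl logs --tail 400 <id>" calls seen above.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
		}
	}
}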
	I1020 12:42:21.117395  258335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:21.204702  258335 kubeadm.go:1113] duration metric: took 3.695131505s to wait for elevateKubeSystemPrivileges
	I1020 12:42:21.204735  258335 kubeadm.go:402] duration metric: took 15.341256879s to StartCluster
	I1020 12:42:21.204756  258335 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:21.204839  258335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:42:21.206160  258335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:21.206390  258335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 12:42:21.206461  258335 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:42:21.206580  258335 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:42:21.206678  258335 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-916479"
	I1020 12:42:21.206694  258335 addons.go:69] Setting default-storageclass=true in profile "newest-cni-916479"
	I1020 12:42:21.206717  258335 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-916479"
	I1020 12:42:21.206719  258335 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-916479"
	I1020 12:42:21.206721  258335 config.go:182] Loaded profile config "newest-cni-916479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:21.206757  258335 host.go:66] Checking if "newest-cni-916479" exists ...
	I1020 12:42:21.207170  258335 cli_runner.go:164] Run: docker container inspect newest-cni-916479 --format={{.State.Status}}
	I1020 12:42:21.207330  258335 cli_runner.go:164] Run: docker container inspect newest-cni-916479 --format={{.State.Status}}
	I1020 12:42:21.210992  258335 out.go:179] * Verifying Kubernetes components...
	I1020 12:42:21.212973  258335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:42:21.239715  258335 addons.go:238] Setting addon default-storageclass=true in "newest-cni-916479"
	I1020 12:42:21.239759  258335 host.go:66] Checking if "newest-cni-916479" exists ...
	I1020 12:42:21.240317  258335 cli_runner.go:164] Run: docker container inspect newest-cni-916479 --format={{.State.Status}}
	I1020 12:42:21.241037  258335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:42:21.243338  258335 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:42:21.243359  258335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:42:21.243421  258335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:21.281791  258335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/newest-cni-916479/id_rsa Username:docker}
	I1020 12:42:21.284427  258335 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:42:21.284460  258335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:42:21.284516  258335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:21.318005  258335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/newest-cni-916479/id_rsa Username:docker}
	I1020 12:42:21.320347  258335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 12:42:21.389660  258335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:42:21.421152  258335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:42:21.457959  258335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:42:21.530602  258335 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1020 12:42:21.531790  258335 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:42:21.531854  258335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:42:21.741277  258335 api_server.go:72] duration metric: took 534.776901ms to wait for apiserver process to appear ...
	I1020 12:42:21.741306  258335 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:42:21.741324  258335 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:42:21.750878  258335 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 12:42:21.751969  258335 api_server.go:141] control plane version: v1.34.1
	I1020 12:42:21.751997  258335 api_server.go:131] duration metric: took 10.683808ms to wait for apiserver health ...
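The healthz wait logged just above polls the apiserver's /healthz endpoint until it answers HTTP 200 with body "ok". A minimal sketch of that loop, assuming the same endpoint as the log; the 500ms retry interval and the blanket certificate skip are illustrative simplifications, not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200/"ok" or the timeout elapses.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a cluster-internal certificate, so verification
		// is skipped here for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}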
	I1020 12:42:21.752006  258335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:42:21.754914  258335 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1020 12:42:21.755068  258335 system_pods.go:59] 5 kube-system pods found
	I1020 12:42:21.755098  258335 system_pods.go:61] "etcd-newest-cni-916479" [6cc5b1dc-6bb0-463f-9043-3a9746e939bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:42:21.755107  258335 system_pods.go:61] "kube-apiserver-newest-cni-916479" [b8088dee-3005-4d3c-8753-f41354d80508] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:42:21.755116  258335 system_pods.go:61] "kube-controller-manager-newest-cni-916479" [9a7c1de4-c31f-4d72-852c-80f5dbadd6bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:42:21.755123  258335 system_pods.go:61] "kube-scheduler-newest-cni-916479" [3642e359-9ffa-4416-a373-f3ea7bdceefd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:42:21.755128  258335 system_pods.go:61] "storage-provisioner" [25f66360-7044-408f-ba49-a64e624206c4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1020 12:42:21.755134  258335 system_pods.go:74] duration metric: took 3.123473ms to wait for pod list to return data ...
	I1020 12:42:21.755144  258335 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:42:21.757506  258335 default_sa.go:45] found service account: "default"
	I1020 12:42:21.757532  258335 default_sa.go:55] duration metric: took 2.382239ms for default service account to be created ...
	I1020 12:42:21.757542  258335 kubeadm.go:586] duration metric: took 551.04872ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1020 12:42:21.757561  258335 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:42:21.758176  258335 addons.go:514] duration metric: took 551.594484ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1020 12:42:21.760142  258335 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:42:21.760168  258335 node_conditions.go:123] node cpu capacity is 8
	I1020 12:42:21.760183  258335 node_conditions.go:105] duration metric: took 2.617772ms to run NodePressure ...
	I1020 12:42:21.760195  258335 start.go:241] waiting for startup goroutines ...
	I1020 12:42:22.035668  258335 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-916479" context rescaled to 1 replicas
	I1020 12:42:22.035707  258335 start.go:246] waiting for cluster config update ...
	I1020 12:42:22.035720  258335 start.go:255] writing updated cluster config ...
	I1020 12:42:22.036055  258335 ssh_runner.go:195] Run: rm -f paused
	I1020 12:42:22.096576  258335 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:42:22.098210  258335 out.go:179] * Done! kubectl is now configured to use "newest-cni-916479" cluster and "default" namespace by default
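After the healthz gate, the start flow above waits for kube-system pods, the default service account, and node conditions before declaring "Done!". A sketch of the pod-listing step using client-go against the kubeconfig minikube just wrote; it is illustrative only, since the real system_pods.go wait also tolerates the ContainersNotReady states visible in the listing above during the seconds right after kubeadm init:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the log above; substitute your own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21773-11075/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Matches the `"name" [uid] Phase` shape of the system_pods lines above.
		fmt.Printf("  %q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}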
	
	
	==> CRI-O <==
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.123834626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.124205559Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5bff4dc6-89f3-4ab1-9cb4-49067f006b4c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.127405087Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.128366802Z" level=info msg="Ran pod sandbox 8e5b4e95f6aed0b6b09b4bf53743e237b0da0704dbbad12013407e372df252b4 with infra container: kube-system/kindnet-zntlb/POD" id=5bff4dc6-89f3-4ab1-9cb4-49067f006b4c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.128824934Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c27e779e-125b-4f0a-b916-b4e6b1fd066c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.130543483Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=fd29926f-5c2a-4c27-94ac-bd4dcd517a28 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.130622764Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.131566262Z" level=info msg="Ran pod sandbox 8781eedc14f2c7ab02415650a249c9951bb0658c9ba9177d41719cde6b57efd7 with infra container: kube-system/kube-proxy-csrfg/POD" id=c27e779e-125b-4f0a-b916-b4e6b1fd066c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.132684136Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b3719002-d389-4a8c-b22e-80eb9bfd9378 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.133026774Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5245fd7c-9432-4470-a85a-a7a838479983 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.134120043Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e9c6823f-afe0-44b5-8ca2-3c54b47a1a37 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.139282599Z" level=info msg="Creating container: kube-system/kindnet-zntlb/kindnet-cni" id=a4f90eb8-bac1-48dc-8865-48751ad2138a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.139407497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.140943059Z" level=info msg="Creating container: kube-system/kube-proxy-csrfg/kube-proxy" id=a801a4b8-e860-444e-a374-d8530b08a925 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.141103241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.148543079Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.14964881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.151979291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.153258697Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.189794845Z" level=info msg="Created container e3ff8025739d89ac6b0d4b6b71d103dc914d2ab0b8d6825c23d20de3a590105f: kube-system/kindnet-zntlb/kindnet-cni" id=a4f90eb8-bac1-48dc-8865-48751ad2138a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.191647963Z" level=info msg="Starting container: e3ff8025739d89ac6b0d4b6b71d103dc914d2ab0b8d6825c23d20de3a590105f" id=64e0df44-0e4f-42df-ba78-b197bafb35bd name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.194175534Z" level=info msg="Started container" PID=1585 containerID=e3ff8025739d89ac6b0d4b6b71d103dc914d2ab0b8d6825c23d20de3a590105f description=kube-system/kindnet-zntlb/kindnet-cni id=64e0df44-0e4f-42df-ba78-b197bafb35bd name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e5b4e95f6aed0b6b09b4bf53743e237b0da0704dbbad12013407e372df252b4
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.198742344Z" level=info msg="Created container a9bee5efe401e7ae6c527d14d7d37f68bd911e2a1653d4bca143bc9e45065978: kube-system/kube-proxy-csrfg/kube-proxy" id=a801a4b8-e860-444e-a374-d8530b08a925 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.19988618Z" level=info msg="Starting container: a9bee5efe401e7ae6c527d14d7d37f68bd911e2a1653d4bca143bc9e45065978" id=2c93d556-16c3-4145-9080-807b4b5c4a18 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:42:22 newest-cni-916479 crio[777]: time="2025-10-20T12:42:22.204521697Z" level=info msg="Started container" PID=1586 containerID=a9bee5efe401e7ae6c527d14d7d37f68bd911e2a1653d4bca143bc9e45065978 description=kube-system/kube-proxy-csrfg/kube-proxy id=2c93d556-16c3-4145-9080-807b4b5c4a18 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8781eedc14f2c7ab02415650a249c9951bb0658c9ba9177d41719cde6b57efd7
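The CRI-O section above is the kubelet driving the standard CRI lifecycle for kindnet and kube-proxy: RunPodSandbox, then CreateContainer, then StartContainer, each logged with its request ID. A hedged sketch of that same sequence issued directly over the runtime.v1 gRPC API; the socket path, metadata, and image are taken from the log for illustration, and error handling is trimmed:

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket; adjust for your runtime.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.TODO()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "kindnet-zntlb", Namespace: "kube-system", Uid: "demo-uid",
		},
	}
	// "Ran pod sandbox ... with infra container" in the log above.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}
	// "Creating container: kube-system/kindnet-zntlb/kindnet-cni".
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kindnet-cni"},
			Image:    &runtimeapi.ImageSpec{Image: "docker.io/kindest/kindnetd:v20250512-df8de77b"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	// "Starting container: ..." followed by "Started container".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		panic(err)
	}
}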
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a9bee5efe401e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   8781eedc14f2c       kube-proxy-csrfg                            kube-system
	e3ff8025739d8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   8e5b4e95f6aed       kindnet-zntlb                               kube-system
	181f911637908       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            0                   2636e8639ddf8       kube-scheduler-newest-cni-916479            kube-system
	07b6d2eb08f86       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            0                   58e52935c8e2c       kube-apiserver-newest-cni-916479            kube-system
	003c08c4f7145       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   0                   ab77e6d3a604d       kube-controller-manager-newest-cni-916479   kube-system
	fe9dc26a08c9a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      0                   93d88f0c18ad4       etcd-newest-cni-916479                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-916479
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-916479
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=newest-cni-916479
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_42_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:42:13 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-916479
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:42:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:42:16 +0000   Mon, 20 Oct 2025 12:42:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:42:16 +0000   Mon, 20 Oct 2025 12:42:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:42:16 +0000   Mon, 20 Oct 2025 12:42:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 20 Oct 2025 12:42:16 +0000   Mon, 20 Oct 2025 12:42:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-916479
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                451dc7fc-eabb-4f6f-b460-dd1caba110ee
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-916479                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-zntlb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-916479             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-916479    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-csrfg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-916479             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 7s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node newest-cni-916479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node newest-cni-916479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node newest-cni-916479 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-916479 event: Registered Node newest-cni-916479 in Controller
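The node above reports Ready=False solely because no CNI configuration exists yet in /etc/cni/net.d/; kindnet was started two seconds earlier and writes that config once running, after which the not-ready taint clears. A small client-go sketch for inspecting exactly these conditions and the assigned PodCIDRs; the default kubeconfig location is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "newest-cni-916479", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Same Type/Status/Reason/Message columns as the Conditions table above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
	fmt.Println("PodCIDRs:", node.Spec.PodCIDRs)
}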
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [fe9dc26a08c9a1ef594fb68f45db583be4044248699ed197bc6c943bccaa8a84] <==
	{"level":"warn","ts":"2025-10-20T12:42:13.312423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:13.321742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:13.329305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:13.337655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:13.346075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:13.353696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:13.360959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:13.382909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:13.390889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:13.398611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:13.445903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:18.829864Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.210687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T12:42:18.829975Z","caller":"traceutil/trace.go:172","msg":"trace[434011412] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:282; }","duration":"151.375401ms","start":"2025-10-20T12:42:18.678580Z","end":"2025-10-20T12:42:18.829956Z","steps":["trace[434011412] 'agreement among raft nodes before linearized reading'  (duration: 25.49993ms)","trace[434011412] 'range keys from in-memory index tree'  (duration: 125.660414ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:18.830526Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.880926ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596502403880628 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/job-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/job-controller\" value_size:119 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-20T12:42:18.830616Z","caller":"traceutil/trace.go:172","msg":"trace[816825441] transaction","detail":"{read_only:false; response_revision:283; number_of_response:1; }","duration":"240.995702ms","start":"2025-10-20T12:42:18.589607Z","end":"2025-10-20T12:42:18.830603Z","steps":["trace[816825441] 'process raft request'  (duration: 114.542638ms)","trace[816825441] 'compare'  (duration: 125.768848ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:19.088261Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.477803ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596502403880632 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/disruption-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/disruption-controller\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-20T12:42:19.088367Z","caller":"traceutil/trace.go:172","msg":"trace[1078765206] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"250.438751ms","start":"2025-10-20T12:42:18.837912Z","end":"2025-10-20T12:42:19.088351Z","steps":["trace[1078765206] 'process raft request'  (duration: 121.824199ms)","trace[1078765206] 'compare'  (duration: 128.375687ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:19.346469Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.057813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T12:42:19.346555Z","caller":"traceutil/trace.go:172","msg":"trace[1920863859] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:284; }","duration":"166.14718ms","start":"2025-10-20T12:42:19.180376Z","end":"2025-10-20T12:42:19.346523Z","steps":["trace[1920863859] 'agreement among raft nodes before linearized reading'  (duration: 36.883017ms)","trace[1920863859] 'range keys from in-memory index tree'  (duration: 129.134819ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:19.346546Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.181656ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596502403880636 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" value_size:139 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-20T12:42:19.346614Z","caller":"traceutil/trace.go:172","msg":"trace[1081621371] transaction","detail":"{read_only:false; response_revision:285; number_of_response:1; }","duration":"248.707696ms","start":"2025-10-20T12:42:19.097896Z","end":"2025-10-20T12:42:19.346603Z","steps":["trace[1081621371] 'process raft request'  (duration: 119.419575ms)","trace[1081621371] 'compare'  (duration: 129.073019ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:19.568590Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.256976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" limit:1 ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2025-10-20T12:42:19.568658Z","caller":"traceutil/trace.go:172","msg":"trace[1418872032] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:288; }","duration":"120.376835ms","start":"2025-10-20T12:42:19.448268Z","end":"2025-10-20T12:42:19.568644Z","steps":["trace[1418872032] 'range keys from in-memory index tree'  (duration: 120.140491ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T12:42:20.107920Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.123757ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596502403880658 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" value_size:129 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-20T12:42:20.108061Z","caller":"traceutil/trace.go:172","msg":"trace[1663859393] transaction","detail":"{read_only:false; response_revision:291; number_of_response:1; }","duration":"168.559843ms","start":"2025-10-20T12:42:19.939476Z","end":"2025-10-20T12:42:20.108036Z","steps":["trace[1663859393] 'process raft request'  (duration: 35.275249ms)","trace[1663859393] 'compare'  (duration: 132.991873ms)"],"step_count":2}
	
	
	==> kernel <==
	 12:42:23 up  1:24,  0 user,  load average: 2.74, 3.20, 2.10
	Linux newest-cni-916479 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3ff8025739d89ac6b0d4b6b71d103dc914d2ab0b8d6825c23d20de3a590105f] <==
	I1020 12:42:22.439619       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:42:22.439986       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 12:42:22.440165       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:42:22.440182       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:42:22.440203       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:42:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:42:22.645919       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:42:22.645966       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:42:22.645979       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:42:22.646149       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [07b6d2eb08f866b9fdfca908db716ba6cac00962f151342220f22bf843bfa2df] <==
	E1020 12:42:14.019275       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1020 12:42:14.064975       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1020 12:42:14.065096       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:42:14.073888       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:42:14.074843       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1020 12:42:14.086105       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:42:14.087614       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:42:14.269366       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:42:14.868400       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1020 12:42:14.872395       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1020 12:42:14.872409       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:42:15.495367       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:42:15.537414       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:42:15.675917       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1020 12:42:15.682853       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1020 12:42:15.684021       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:42:15.689255       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:42:15.943119       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:42:16.612534       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:42:16.623375       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1020 12:42:16.632046       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 12:42:21.599974       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:42:21.604798       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:42:21.794393       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1020 12:42:21.845938       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [003c08c4f7145d9566777f66d61c87b34b70c38ea723a9cbaeeee301846084d0] <==
	I1020 12:42:20.923122       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-916479" podCIDRs=["10.42.0.0/24"]
	I1020 12:42:20.924853       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:42:20.927309       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 12:42:20.933445       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1020 12:42:20.935852       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 12:42:20.941409       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1020 12:42:20.944631       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 12:42:20.944689       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 12:42:20.944753       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 12:42:20.944864       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 12:42:20.944916       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 12:42:20.945306       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 12:42:20.945694       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 12:42:20.947029       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1020 12:42:20.949252       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1020 12:42:20.949369       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 12:42:20.952255       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:42:20.954569       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 12:42:20.955753       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 12:42:20.963327       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 12:42:20.969483       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1020 12:42:20.984728       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:42:20.991886       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:42:20.991911       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:42:20.991920       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [a9bee5efe401e7ae6c527d14d7d37f68bd911e2a1653d4bca143bc9e45065978] <==
	I1020 12:42:22.251191       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:42:22.322230       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:42:22.422426       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:42:22.422467       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 12:42:22.422582       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:42:22.446480       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:42:22.446630       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:42:22.453001       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:42:22.453556       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:42:22.453580       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:42:22.455872       1 config.go:309] "Starting node config controller"
	I1020 12:42:22.455913       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:42:22.455928       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:42:22.455878       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:42:22.455942       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:42:22.456076       1 config.go:200] "Starting service config controller"
	I1020 12:42:22.456092       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:42:22.456128       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:42:22.456133       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:42:22.559919       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:42:22.560153       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 12:42:22.560186       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [181f9116379085f71ba3a24b8b712ea9bde6e9a5a72ee334c89ea202fa9170a9] <==
	E1020 12:42:13.933693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:42:13.931261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 12:42:13.933807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:42:13.933855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:42:13.933866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:42:13.934866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:42:13.933444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:42:13.933352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:42:13.934953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:42:13.933732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 12:42:14.749719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:42:14.812682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:42:14.812687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:42:14.968288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:42:14.997688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:42:15.030172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:42:15.053471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:42:15.055379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 12:42:15.055581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:42:15.078122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:42:15.146864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:42:15.163166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1020 12:42:15.211458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:42:15.257140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1020 12:42:17.828230       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
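The "Failed to watch ... is forbidden" errors above are a routine startup race: the scheduler's informers begin listing before RBAC bootstrapping finishes, and the errors stop once caches sync (the final line of the section). A hedged sketch of probing such a denial with a SelfSubjectAccessReview; note it checks whatever identity the current kubeconfig carries, not system:kube-scheduler, so against an admin kubeconfig it will report allowed:

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same verb/group/resource as the PodDisruptionBudget denial above.
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb: "list", Group: "policy", Resource: "poddisruptionbudgets",
			},
		},
	}
	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}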
	
	
	==> kubelet <==
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: I1020 12:42:17.459471    1315 apiserver.go:52] "Watching apiserver"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: I1020 12:42:17.470523    1315 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: I1020 12:42:17.502601    1315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-916479"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: I1020 12:42:17.502660    1315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-916479"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: I1020 12:42:17.502831    1315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-916479"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: I1020 12:42:17.502852    1315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-916479"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: E1020 12:42:17.516565    1315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-916479\" already exists" pod="kube-system/kube-scheduler-newest-cni-916479"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: E1020 12:42:17.525900    1315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-916479\" already exists" pod="kube-system/etcd-newest-cni-916479"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: E1020 12:42:17.526409    1315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-916479\" already exists" pod="kube-system/kube-controller-manager-newest-cni-916479"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: E1020 12:42:17.526650    1315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-916479\" already exists" pod="kube-system/kube-apiserver-newest-cni-916479"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: I1020 12:42:17.531467    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-916479" podStartSLOduration=1.531446015 podStartE2EDuration="1.531446015s" podCreationTimestamp="2025-10-20 12:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:17.531163579 +0000 UTC m=+1.144081651" watchObservedRunningTime="2025-10-20 12:42:17.531446015 +0000 UTC m=+1.144364087"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: I1020 12:42:17.557352    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-916479" podStartSLOduration=1.5573255750000001 podStartE2EDuration="1.557325575s" podCreationTimestamp="2025-10-20 12:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:17.544265063 +0000 UTC m=+1.157183139" watchObservedRunningTime="2025-10-20 12:42:17.557325575 +0000 UTC m=+1.170243647"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: I1020 12:42:17.598933    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-916479" podStartSLOduration=2.5989132230000003 podStartE2EDuration="2.598913223s" podCreationTimestamp="2025-10-20 12:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:17.558182605 +0000 UTC m=+1.171100677" watchObservedRunningTime="2025-10-20 12:42:17.598913223 +0000 UTC m=+1.211831294"
	Oct 20 12:42:17 newest-cni-916479 kubelet[1315]: I1020 12:42:17.615603    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-916479" podStartSLOduration=1.615584142 podStartE2EDuration="1.615584142s" podCreationTimestamp="2025-10-20 12:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:17.599900556 +0000 UTC m=+1.212818628" watchObservedRunningTime="2025-10-20 12:42:17.615584142 +0000 UTC m=+1.228502213"
	Oct 20 12:42:21 newest-cni-916479 kubelet[1315]: I1020 12:42:21.025590    1315 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 20 12:42:21 newest-cni-916479 kubelet[1315]: I1020 12:42:21.026685    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 20 12:42:21 newest-cni-916479 kubelet[1315]: I1020 12:42:21.908271    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a56ea05-0dbb-4a3a-8510-14d952a4f69b-kube-proxy\") pod \"kube-proxy-csrfg\" (UID: \"2a56ea05-0dbb-4a3a-8510-14d952a4f69b\") " pod="kube-system/kube-proxy-csrfg"
	Oct 20 12:42:21 newest-cni-916479 kubelet[1315]: I1020 12:42:21.908316    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a56ea05-0dbb-4a3a-8510-14d952a4f69b-xtables-lock\") pod \"kube-proxy-csrfg\" (UID: \"2a56ea05-0dbb-4a3a-8510-14d952a4f69b\") " pod="kube-system/kube-proxy-csrfg"
	Oct 20 12:42:21 newest-cni-916479 kubelet[1315]: I1020 12:42:21.908329    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a56ea05-0dbb-4a3a-8510-14d952a4f69b-lib-modules\") pod \"kube-proxy-csrfg\" (UID: \"2a56ea05-0dbb-4a3a-8510-14d952a4f69b\") " pod="kube-system/kube-proxy-csrfg"
	Oct 20 12:42:21 newest-cni-916479 kubelet[1315]: I1020 12:42:21.908347    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r52rq\" (UniqueName: \"kubernetes.io/projected/2a56ea05-0dbb-4a3a-8510-14d952a4f69b-kube-api-access-r52rq\") pod \"kube-proxy-csrfg\" (UID: \"2a56ea05-0dbb-4a3a-8510-14d952a4f69b\") " pod="kube-system/kube-proxy-csrfg"
	Oct 20 12:42:21 newest-cni-916479 kubelet[1315]: I1020 12:42:21.908369    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c49499e7-f553-4426-9c41-b6e9c93c1ee1-xtables-lock\") pod \"kindnet-zntlb\" (UID: \"c49499e7-f553-4426-9c41-b6e9c93c1ee1\") " pod="kube-system/kindnet-zntlb"
	Oct 20 12:42:21 newest-cni-916479 kubelet[1315]: I1020 12:42:21.908388    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c49499e7-f553-4426-9c41-b6e9c93c1ee1-lib-modules\") pod \"kindnet-zntlb\" (UID: \"c49499e7-f553-4426-9c41-b6e9c93c1ee1\") " pod="kube-system/kindnet-zntlb"
	Oct 20 12:42:21 newest-cni-916479 kubelet[1315]: I1020 12:42:21.908403    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c49499e7-f553-4426-9c41-b6e9c93c1ee1-cni-cfg\") pod \"kindnet-zntlb\" (UID: \"c49499e7-f553-4426-9c41-b6e9c93c1ee1\") " pod="kube-system/kindnet-zntlb"
	Oct 20 12:42:21 newest-cni-916479 kubelet[1315]: I1020 12:42:21.908417    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hprw4\" (UniqueName: \"kubernetes.io/projected/c49499e7-f553-4426-9c41-b6e9c93c1ee1-kube-api-access-hprw4\") pod \"kindnet-zntlb\" (UID: \"c49499e7-f553-4426-9c41-b6e9c93c1ee1\") " pod="kube-system/kindnet-zntlb"
	Oct 20 12:42:22 newest-cni-916479 kubelet[1315]: I1020 12:42:22.570465    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-csrfg" podStartSLOduration=1.5704392120000001 podStartE2EDuration="1.570439212s" podCreationTimestamp="2025-10-20 12:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:22.552430668 +0000 UTC m=+6.165348736" watchObservedRunningTime="2025-10-20 12:42:22.570439212 +0000 UTC m=+6.183357284"
	

                                                
                                                
-- /stdout --
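A note on the excerpt above: the kube-scheduler "Failed to watch ... forbidden" errors are typically a startup transient (the informers race RBAC setup and clear once "Caches are synced" is logged), and the kubelet "Failed creating a mirror pod ... already exists" errors mean only that the static-pod mirrors survived the kubelet restart. A minimal verification sketch, assuming the newest-cni-916479 context from this run is still reachable (these commands are not harness output):

	# Scheduler RBAC should now permit the watches it was denied at startup:
	kubectl --context newest-cni-916479 auth can-i list persistentvolumeclaims \
	  --as=system:kube-scheduler --all-namespaces
	# Mirror pods for the static control-plane pods carry the config.mirror annotation:
	kubectl --context newest-cni-916479 -n kube-system get pods \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.kubernetes\.io/config\.mirror}{"\n"}{end}'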
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-916479 -n newest-cni-916479
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-916479 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-vzfdm storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-916479 describe pod coredns-66bc5c9577-vzfdm storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-916479 describe pod coredns-66bc5c9577-vzfdm storage-provisioner: exit status 1 (70.328531ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-vzfdm" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-916479 describe pod coredns-66bc5c9577-vzfdm storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.29s)
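The NotFound errors in the post-mortem above are expected: the describe step is issued without a namespace, so kubectl looks in default, while coredns-66bc5c9577-vzfdm and storage-provisioner live in kube-system (the pods may also have been replaced between the get and the describe). A hedged re-run of the same two-step query outside the harness, with the namespace supplied:

	kubectl --context newest-cni-916479 get po -A \
	  --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	kubectl --context newest-cni-916479 -n kube-system describe pod \
	  coredns-66bc5c9577-vzfdm storage-provisioner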

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-916479 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-916479 --alsologtostderr -v=1: exit status 80 (2.362421717s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-916479 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 12:42:38.242204  271459 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:42:38.242496  271459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:42:38.242507  271459 out.go:374] Setting ErrFile to fd 2...
	I1020 12:42:38.242511  271459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:42:38.242687  271459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:42:38.242949  271459 out.go:368] Setting JSON to false
	I1020 12:42:38.243007  271459 mustload.go:65] Loading cluster: newest-cni-916479
	I1020 12:42:38.243352  271459 config.go:182] Loaded profile config "newest-cni-916479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:38.243719  271459 cli_runner.go:164] Run: docker container inspect newest-cni-916479 --format={{.State.Status}}
	I1020 12:42:38.262377  271459 host.go:66] Checking if "newest-cni-916479" exists ...
	I1020 12:42:38.262677  271459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:42:38.323356  271459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:89 SystemTime:2025-10-20 12:42:38.31294185 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:42:38.324047  271459 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-916479 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 12:42:38.326267  271459 out.go:179] * Pausing node newest-cni-916479 ... 
	I1020 12:42:38.327599  271459 host.go:66] Checking if "newest-cni-916479" exists ...
	I1020 12:42:38.327922  271459 ssh_runner.go:195] Run: systemctl --version
	I1020 12:42:38.327967  271459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:38.348592  271459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/newest-cni-916479/id_rsa Username:docker}
	I1020 12:42:38.450893  271459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:42:38.463922  271459 pause.go:52] kubelet running: true
	I1020 12:42:38.463977  271459 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:42:38.606762  271459 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:42:38.606872  271459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:42:38.677480  271459 cri.go:89] found id: "376c4310e513c93fb27da3ecf5a92c92a7d091aff4be289f7efcbb5ef4529e52"
	I1020 12:42:38.677513  271459 cri.go:89] found id: "a44e701d9cff939b12d7bb214628681bd95e3c111390bf3b7f97781629394c3f"
	I1020 12:42:38.677519  271459 cri.go:89] found id: "34142c2a23f3beb7546b5829b045f4533e20cc9b20ce254c5c913f26c1392585"
	I1020 12:42:38.677524  271459 cri.go:89] found id: "4ed1efea72e5487e5e59517d90f26b5f62c3ef4e40854d57c12937d555a638b3"
	I1020 12:42:38.677529  271459 cri.go:89] found id: "7e4e331dd05d4a1c40b62eb1b442db845333bff25245997d0a175d9d9ef8fd1a"
	I1020 12:42:38.677534  271459 cri.go:89] found id: "567b442da6356bc69fde03e6c53050f8be519b40dfa1b75d724606afef5be74b"
	I1020 12:42:38.677538  271459 cri.go:89] found id: ""
	I1020 12:42:38.677598  271459 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:42:38.690673  271459 retry.go:31] will retry after 312.985231ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:42:38Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:42:39.004321  271459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:42:39.018084  271459 pause.go:52] kubelet running: false
	I1020 12:42:39.018141  271459 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:42:39.137817  271459 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:42:39.137908  271459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:42:39.204979  271459 cri.go:89] found id: "376c4310e513c93fb27da3ecf5a92c92a7d091aff4be289f7efcbb5ef4529e52"
	I1020 12:42:39.205003  271459 cri.go:89] found id: "a44e701d9cff939b12d7bb214628681bd95e3c111390bf3b7f97781629394c3f"
	I1020 12:42:39.205006  271459 cri.go:89] found id: "34142c2a23f3beb7546b5829b045f4533e20cc9b20ce254c5c913f26c1392585"
	I1020 12:42:39.205010  271459 cri.go:89] found id: "4ed1efea72e5487e5e59517d90f26b5f62c3ef4e40854d57c12937d555a638b3"
	I1020 12:42:39.205013  271459 cri.go:89] found id: "7e4e331dd05d4a1c40b62eb1b442db845333bff25245997d0a175d9d9ef8fd1a"
	I1020 12:42:39.205016  271459 cri.go:89] found id: "567b442da6356bc69fde03e6c53050f8be519b40dfa1b75d724606afef5be74b"
	I1020 12:42:39.205018  271459 cri.go:89] found id: ""
	I1020 12:42:39.205055  271459 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:42:39.216841  271459 retry.go:31] will retry after 351.278477ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:42:39Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:42:39.568373  271459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:42:39.586091  271459 pause.go:52] kubelet running: false
	I1020 12:42:39.586144  271459 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:42:39.714545  271459 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:42:39.714630  271459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:42:39.786712  271459 cri.go:89] found id: "376c4310e513c93fb27da3ecf5a92c92a7d091aff4be289f7efcbb5ef4529e52"
	I1020 12:42:39.786750  271459 cri.go:89] found id: "a44e701d9cff939b12d7bb214628681bd95e3c111390bf3b7f97781629394c3f"
	I1020 12:42:39.786755  271459 cri.go:89] found id: "34142c2a23f3beb7546b5829b045f4533e20cc9b20ce254c5c913f26c1392585"
	I1020 12:42:39.786760  271459 cri.go:89] found id: "4ed1efea72e5487e5e59517d90f26b5f62c3ef4e40854d57c12937d555a638b3"
	I1020 12:42:39.786764  271459 cri.go:89] found id: "7e4e331dd05d4a1c40b62eb1b442db845333bff25245997d0a175d9d9ef8fd1a"
	I1020 12:42:39.786809  271459 cri.go:89] found id: "567b442da6356bc69fde03e6c53050f8be519b40dfa1b75d724606afef5be74b"
	I1020 12:42:39.786820  271459 cri.go:89] found id: ""
	I1020 12:42:39.786867  271459 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:42:39.798766  271459 retry.go:31] will retry after 528.930196ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:42:39Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:42:40.328559  271459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:42:40.342578  271459 pause.go:52] kubelet running: false
	I1020 12:42:40.342642  271459 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:42:40.463885  271459 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:42:40.463952  271459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:42:40.533492  271459 cri.go:89] found id: "376c4310e513c93fb27da3ecf5a92c92a7d091aff4be289f7efcbb5ef4529e52"
	I1020 12:42:40.533516  271459 cri.go:89] found id: "a44e701d9cff939b12d7bb214628681bd95e3c111390bf3b7f97781629394c3f"
	I1020 12:42:40.533520  271459 cri.go:89] found id: "34142c2a23f3beb7546b5829b045f4533e20cc9b20ce254c5c913f26c1392585"
	I1020 12:42:40.533523  271459 cri.go:89] found id: "4ed1efea72e5487e5e59517d90f26b5f62c3ef4e40854d57c12937d555a638b3"
	I1020 12:42:40.533526  271459 cri.go:89] found id: "7e4e331dd05d4a1c40b62eb1b442db845333bff25245997d0a175d9d9ef8fd1a"
	I1020 12:42:40.533529  271459 cri.go:89] found id: "567b442da6356bc69fde03e6c53050f8be519b40dfa1b75d724606afef5be74b"
	I1020 12:42:40.533532  271459 cri.go:89] found id: ""
	I1020 12:42:40.533569  271459 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:42:40.548083  271459 out.go:203] 
	W1020 12:42:40.549430  271459 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:42:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:42:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:42:40.549453  271459 out.go:285] * 
	* 
	W1020 12:42:40.553371  271459 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:42:40.554681  271459 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-916479 --alsologtostderr -v=1 failed: exit status 80
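The failure mode above is identical across all four attempts: crictl (talking to CRI-O) lists the six kube-system containers without trouble, but the pause path then shells out to `sudo runc list -f json`, which reads runc's state directory (/run/runc by default) and finds it missing, so the retry loop exhausts and the command exits with GUEST_PAUSE (status 80). A hedged reproduction sketch from the host; the /run/crun path is an assumption about this kicbase image, not harness output:

	# Succeeds, as in the log: CRI-O's view of kube-system containers
	minikube -p newest-cni-916479 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Fails, as in the log: runc's default state dir is absent
	minikube -p newest-cni-916479 ssh -- sudo runc list -f json
	# Locate the OCI runtime state dir actually in use (crun is a guess)
	minikube -p newest-cni-916479 ssh -- "ls -d /run/runc /run/crun 2>/dev/null || true"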
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-916479
helpers_test.go:243: (dbg) docker inspect newest-cni-916479:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef",
	        "Created": "2025-10-20T12:42:01.570705232Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268768,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:42:27.482346674Z",
	            "FinishedAt": "2025-10-20T12:42:26.546948302Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef/hosts",
	        "LogPath": "/var/lib/docker/containers/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef-json.log",
	        "Name": "/newest-cni-916479",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-916479:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-916479",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef",
	                "LowerDir": "/var/lib/docker/overlay2/972670fa2c6ad8377499180ff37e45c87569b7462a3871ce6df2ede18d0d4614-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/972670fa2c6ad8377499180ff37e45c87569b7462a3871ce6df2ede18d0d4614/merged",
	                "UpperDir": "/var/lib/docker/overlay2/972670fa2c6ad8377499180ff37e45c87569b7462a3871ce6df2ede18d0d4614/diff",
	                "WorkDir": "/var/lib/docker/overlay2/972670fa2c6ad8377499180ff37e45c87569b7462a3871ce6df2ede18d0d4614/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-916479",
	                "Source": "/var/lib/docker/volumes/newest-cni-916479/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-916479",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-916479",
	                "name.minikube.sigs.k8s.io": "newest-cni-916479",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb286f19d48f465c7b954e1dec027a01a52b53368b65df18834f9dd643e26424",
	            "SandboxKey": "/var/run/docker/netns/fb286f19d48f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-916479": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:00:96:a5:10:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0498bf893ff4ca5c840f9bd85d2a414a351b283489487091a509c21cecdac157",
	                    "EndpointID": "ee61021587d6b272e748b9be55e648c10a614ae79444c0500c2e3f1106e3f44e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-916479",
	                        "f767c4ce93d0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
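For reference, the forwarded SSH port recorded in the inspect output above (33088 under "22/tcp") is exactly what the earlier cli_runner call extracted; the same Go-template query can be re-run by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-916479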
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-916479 -n newest-cni-916479
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-916479 -n newest-cni-916479: exit status 2 (370.696562ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-916479 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-916479 logs -n 25: (1.240205551s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-649841 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ addons  │ enable dashboard -p no-preload-649841 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:40 UTC │
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:41 UTC │
	│ image   │ old-k8s-version-384253 image list --format=json                                                                                                                                                                                               │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p old-k8s-version-384253 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ start   │ -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:42 UTC │
	│ image   │ no-preload-649841 image list --format=json                                                                                                                                                                                                    │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p no-preload-649841 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ delete  │ -p no-preload-649841                                                                                                                                                                                                                          │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ delete  │ -p no-preload-649841                                                                                                                                                                                                                          │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ start   │ -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p cert-expiration-365628 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p cert-expiration-365628                                                                                                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p disable-driver-mounts-796609                                                                                                                                                                                                               │ disable-driver-mounts-796609 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-874012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-916479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-874012 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ stop    │ -p newest-cni-916479 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-916479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ image   │ newest-cni-916479 image list --format=json                                                                                                                                                                                                    │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ pause   │ -p newest-cni-916479 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:42:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:42:27.224550  268521 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:42:27.224652  268521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:42:27.224663  268521 out.go:374] Setting ErrFile to fd 2...
	I1020 12:42:27.224670  268521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:42:27.224906  268521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:42:27.225376  268521 out.go:368] Setting JSON to false
	I1020 12:42:27.226533  268521 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5096,"bootTime":1760959051,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:42:27.226593  268521 start.go:141] virtualization: kvm guest
	I1020 12:42:27.228712  268521 out.go:179] * [newest-cni-916479] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:42:27.230351  268521 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:42:27.230349  268521 notify.go:220] Checking for updates...
	I1020 12:42:27.233437  268521 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:42:27.234652  268521 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:42:27.236076  268521 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:42:27.237293  268521 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:42:27.238513  268521 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:42:27.240488  268521 config.go:182] Loaded profile config "newest-cni-916479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:27.241286  268521 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:42:27.267167  268521 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:42:27.267334  268521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:42:27.331800  268521 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:84 SystemTime:2025-10-20 12:42:27.319855446 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:42:27.331949  268521 docker.go:318] overlay module found
	I1020 12:42:27.334292  268521 out.go:179] * Using the docker driver based on existing profile
	I1020 12:42:27.335780  268521 start.go:305] selected driver: docker
	I1020 12:42:27.335800  268521 start.go:925] validating driver "docker" against &{Name:newest-cni-916479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-916479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:42:27.335914  268521 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:42:27.336679  268521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:42:27.403595  268521 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:84 SystemTime:2025-10-20 12:42:27.391348834 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
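
The struct above comes from the single `docker system info --format "{{json .}}"` call logged just before it; the CgroupDriver:systemd field is what later drives the "detected \"systemd\" cgroup driver on host os" decision. The same probe can be replayed by hand; a minimal sketch (jq is an assumption here, the test itself only runs the docker command):

    docker system info --format '{{json .}}' | jq -r '.CgroupDriver'   # -> systemd
    # or pull a single field with the Go template directly:
    docker system info --format '{{.CgroupDriver}}'
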
	I1020 12:42:27.403982  268521 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1020 12:42:27.404014  268521 cni.go:84] Creating CNI manager for ""
	I1020 12:42:27.404077  268521 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:42:27.404120  268521 start.go:349] cluster config:
	{Name:newest-cni-916479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-916479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:42:27.406794  268521 out.go:179] * Starting "newest-cni-916479" primary control-plane node in "newest-cni-916479" cluster
	I1020 12:42:27.408091  268521 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:42:27.409468  268521 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:42:27.410697  268521 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:42:27.410733  268521 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:42:27.410756  268521 cache.go:58] Caching tarball of preloaded images
	I1020 12:42:27.410752  268521 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:42:27.410880  268521 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:42:27.410895  268521 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:42:27.411037  268521 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/newest-cni-916479/config.json ...
	I1020 12:42:27.432468  268521 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:42:27.432486  268521 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:42:27.432501  268521 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:42:27.432526  268521 start.go:360] acquireMachinesLock for newest-cni-916479: {Name:mkf824b1211ecf97f3eacf6ad91e653f110e663f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:42:27.432591  268521 start.go:364] duration metric: took 42.056µs to acquireMachinesLock for "newest-cni-916479"
	I1020 12:42:27.432611  268521 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:42:27.432617  268521 fix.go:54] fixHost starting: 
	I1020 12:42:27.432907  268521 cli_runner.go:164] Run: docker container inspect newest-cni-916479 --format={{.State.Status}}
	I1020 12:42:27.451154  268521 fix.go:112] recreateIfNeeded on newest-cni-916479: state=Stopped err=<nil>
	W1020 12:42:27.451188  268521 fix.go:138] unexpected machine state, will restart: <nil>
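
fixHost found the container stopped (state=Stopped above), so rather than recreating the machine it falls through to a plain restart, picked up below at "Restarting existing docker container". The inspect-then-start path it takes, as a hand-run sketch:

    state="$(docker container inspect newest-cni-916479 --format '{{.State.Status}}')"
    if [ "$state" != "running" ]; then
      docker start newest-cni-916479
    fi
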
	I1020 12:42:26.577257  263183 out.go:252]   - Generating certificates and keys ...
	I1020 12:42:26.577337  263183 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 12:42:26.577402  263183 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 12:42:26.722192  263183 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 12:42:26.901438  263183 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 12:42:27.364186  263183 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 12:42:27.915810  263183 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 12:42:27.957468  263183 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 12:42:27.957684  263183 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-907116 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1020 12:42:28.129885  263183 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 12:42:28.130053  263183 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-907116 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1020 12:42:28.478161  263183 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 12:42:28.562336  263183 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 12:42:28.592859  263183 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 12:42:28.592964  263183 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 12:42:29.057322  263183 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 12:42:29.444831  263183 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 12:42:29.603448  263183 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 12:42:29.988862  263183 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 12:42:30.257401  263183 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 12:42:30.257985  263183 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 12:42:30.261845  263183 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
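
The etcd serving certificates above carry both DNS and IP SANs ([embed-certs-907116 localhost] and [192.168.76.2 127.0.0.1 ::1]), so peers and clients can dial etcd by name or by address. A rough openssl equivalent of issuing such a cert (the key and CA file names are hypothetical, and ::1 is left out of the sketch for brevity):

    openssl req -new -key etcd-server.key -subj '/CN=embed-certs-907116' -out etcd-server.csr
    openssl x509 -req -in etcd-server.csr -CA etcd-ca.crt -CAkey etcd-ca.key -CAcreateserial \
        -days 365 -out etcd-server.crt \
        -extfile <(printf 'subjectAltName=DNS:embed-certs-907116,DNS:localhost,IP:192.168.76.2,IP:127.0.0.1')
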
	I1020 12:42:26.828607  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:42:26.829064  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:42:26.829127  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:42:26.829190  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:42:26.858160  236655 cri.go:89] found id: "badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:26.858190  236655 cri.go:89] found id: ""
	I1020 12:42:26.858200  236655 logs.go:282] 1 containers: [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a]
	I1020 12:42:26.858313  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:26.862312  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:42:26.862366  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:42:26.892597  236655 cri.go:89] found id: ""
	I1020 12:42:26.892631  236655 logs.go:282] 0 containers: []
	W1020 12:42:26.892644  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:42:26.892653  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:42:26.892714  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:42:26.926228  236655 cri.go:89] found id: ""
	I1020 12:42:26.926259  236655 logs.go:282] 0 containers: []
	W1020 12:42:26.926270  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:42:26.926277  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:42:26.926342  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:42:26.959334  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:26.959359  236655 cri.go:89] found id: ""
	I1020 12:42:26.959368  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:42:26.959425  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:26.963918  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:42:26.963978  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:42:26.992934  236655 cri.go:89] found id: ""
	I1020 12:42:26.992963  236655 logs.go:282] 0 containers: []
	W1020 12:42:26.992973  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:42:26.992980  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:42:26.993038  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:42:27.023517  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:27.023542  236655 cri.go:89] found id: ""
	I1020 12:42:27.023552  236655 logs.go:282] 1 containers: [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb]
	I1020 12:42:27.023614  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:27.027920  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:42:27.027986  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:42:27.057691  236655 cri.go:89] found id: ""
	I1020 12:42:27.057716  236655 logs.go:282] 0 containers: []
	W1020 12:42:27.057727  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:42:27.057734  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:42:27.057810  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:42:27.086756  236655 cri.go:89] found id: ""
	I1020 12:42:27.086796  236655 logs.go:282] 0 containers: []
	W1020 12:42:27.086806  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:42:27.086816  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:27.086830  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:27.118265  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:42:27.118293  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:42:27.180538  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:42:27.180569  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:42:27.213117  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:27.213149  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:27.314952  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:42:27.314988  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:42:27.332539  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:42:27.332563  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:42:27.405303  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:42:27.405325  236655 logs.go:123] Gathering logs for kube-apiserver [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a] ...
	I1020 12:42:27.405339  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:27.439578  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:42:27.439607  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:30.008289  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:42:30.008804  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:42:30.008858  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:42:30.008909  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:42:30.038179  236655 cri.go:89] found id: "badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:30.038201  236655 cri.go:89] found id: ""
	I1020 12:42:30.038210  236655 logs.go:282] 1 containers: [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a]
	I1020 12:42:30.038269  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:30.042344  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:42:30.042400  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:42:30.068614  236655 cri.go:89] found id: ""
	I1020 12:42:30.068635  236655 logs.go:282] 0 containers: []
	W1020 12:42:30.068642  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:42:30.068647  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:42:30.068707  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:42:30.098547  236655 cri.go:89] found id: ""
	I1020 12:42:30.098573  236655 logs.go:282] 0 containers: []
	W1020 12:42:30.098587  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:42:30.098595  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:42:30.098647  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:42:30.130753  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:30.130798  236655 cri.go:89] found id: ""
	I1020 12:42:30.130808  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:42:30.130870  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:30.135241  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:42:30.135318  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:42:30.165704  236655 cri.go:89] found id: ""
	I1020 12:42:30.165840  236655 logs.go:282] 0 containers: []
	W1020 12:42:30.165852  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:42:30.165858  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:42:30.165917  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:42:30.194258  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:30.194283  236655 cri.go:89] found id: ""
	I1020 12:42:30.194298  236655 logs.go:282] 1 containers: [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb]
	I1020 12:42:30.194358  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:30.198393  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:42:30.198466  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:42:30.225993  236655 cri.go:89] found id: ""
	I1020 12:42:30.226015  236655 logs.go:282] 0 containers: []
	W1020 12:42:30.226042  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:42:30.226052  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:42:30.226105  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:42:30.254868  236655 cri.go:89] found id: ""
	I1020 12:42:30.254899  236655 logs.go:282] 0 containers: []
	W1020 12:42:30.254911  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:42:30.254921  236655 logs.go:123] Gathering logs for kube-apiserver [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a] ...
	I1020 12:42:30.254932  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:30.289396  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:42:30.289423  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:30.355071  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:30.355107  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:30.387255  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:42:30.387279  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:42:30.452356  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:42:30.452399  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:42:30.487549  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:30.487581  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:30.581398  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:42:30.581434  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:42:30.597103  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:42:30.597137  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:42:30.658761  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
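
Each cycle above follows the same shape: one healthz probe against the apiserver, then, on "connection refused", a full sweep of container listings and log collection before the next attempt. The probe itself reduces to the following (a sketch; -k because the apiserver cert is signed by the cluster's own CA, not a system root):

    until curl -ksf https://192.168.94.2:8443/healthz >/dev/null; do
        sleep 2   # the loop above gathers logs here instead of just sleeping
    done
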
	I1020 12:42:27.453142  268521 out.go:252] * Restarting existing docker container for "newest-cni-916479" ...
	I1020 12:42:27.453236  268521 cli_runner.go:164] Run: docker start newest-cni-916479
	I1020 12:42:27.700348  268521 cli_runner.go:164] Run: docker container inspect newest-cni-916479 --format={{.State.Status}}
	I1020 12:42:27.721103  268521 kic.go:430] container "newest-cni-916479" state is running.
	I1020 12:42:27.721570  268521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-916479
	I1020 12:42:27.741123  268521 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/newest-cni-916479/config.json ...
	I1020 12:42:27.741394  268521 machine.go:93] provisionDockerMachine start ...
	I1020 12:42:27.741472  268521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:27.761171  268521 main.go:141] libmachine: Using SSH client type: native
	I1020 12:42:27.761389  268521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1020 12:42:27.761400  268521 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:42:27.762161  268521 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46588->127.0.0.1:33088: read: connection reset by peer
	I1020 12:42:30.909977  268521 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-916479
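
The dial at 12:42:27 was reset because sshd inside the freshly started container was not yet accepting connections; the client retries until the `hostname` probe above succeeds about three seconds later. The same wait, using the mapped port (33088) and machine key this run logs (a sketch):

    until ssh -o StrictHostKeyChecking=no -p 33088 \
        -i /home/jenkins/minikube-integration/21773-11075/.minikube/machines/newest-cni-916479/id_rsa \
        docker@127.0.0.1 hostname 2>/dev/null; do
      sleep 1
    done
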
	
	I1020 12:42:30.910011  268521 ubuntu.go:182] provisioning hostname "newest-cni-916479"
	I1020 12:42:30.910077  268521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:30.927996  268521 main.go:141] libmachine: Using SSH client type: native
	I1020 12:42:30.928209  268521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1020 12:42:30.928223  268521 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-916479 && echo "newest-cni-916479" | sudo tee /etc/hostname
	I1020 12:42:31.076849  268521 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-916479
	
	I1020 12:42:31.076938  268521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:31.094789  268521 main.go:141] libmachine: Using SSH client type: native
	I1020 12:42:31.095078  268521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1020 12:42:31.095107  268521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-916479' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-916479/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-916479' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:42:31.237824  268521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:42:31.237856  268521 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:42:31.237892  268521 ubuntu.go:190] setting up certificates
	I1020 12:42:31.237904  268521 provision.go:84] configureAuth start
	I1020 12:42:31.237979  268521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-916479
	I1020 12:42:31.256722  268521 provision.go:143] copyHostCerts
	I1020 12:42:31.256817  268521 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:42:31.256834  268521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:42:31.256908  268521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:42:31.257003  268521 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:42:31.257013  268521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:42:31.257039  268521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:42:31.257099  268521 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:42:31.257107  268521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:42:31.257129  268521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:42:31.257184  268521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.newest-cni-916479 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-916479]
	I1020 12:42:31.568728  268521 provision.go:177] copyRemoteCerts
	I1020 12:42:31.568821  268521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:42:31.568864  268521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:31.591460  268521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/newest-cni-916479/id_rsa Username:docker}
	I1020 12:42:31.694943  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 12:42:31.714437  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:42:31.732910  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:42:31.751134  268521 provision.go:87] duration metric: took 513.215537ms to configureAuth
	I1020 12:42:31.751163  268521 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:42:31.751375  268521 config.go:182] Loaded profile config "newest-cni-916479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:31.751476  268521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:31.772495  268521 main.go:141] libmachine: Using SSH client type: native
	I1020 12:42:31.772842  268521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1020 12:42:31.772866  268521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:42:32.091980  268521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:42:32.092001  268521 machine.go:96] duration metric: took 4.350591218s to provisionDockerMachine
	I1020 12:42:32.092016  268521 start.go:293] postStartSetup for "newest-cni-916479" (driver="docker")
	I1020 12:42:32.092028  268521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:42:32.092098  268521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:42:32.092145  268521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:32.114400  268521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/newest-cni-916479/id_rsa Username:docker}
	I1020 12:42:32.223857  268521 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:42:32.228612  268521 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:42:32.228641  268521 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:42:32.228653  268521 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:42:32.228717  268521 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:42:32.228849  268521 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:42:32.228978  268521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:42:32.238199  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:42:32.259674  268521 start.go:296] duration metric: took 167.642925ms for postStartSetup
	I1020 12:42:32.259766  268521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:42:32.259839  268521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:32.286875  268521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/newest-cni-916479/id_rsa Username:docker}
	I1020 12:42:32.391882  268521 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:42:32.397128  268521 fix.go:56] duration metric: took 4.964505896s for fixHost
	I1020 12:42:32.397157  268521 start.go:83] releasing machines lock for "newest-cni-916479", held for 4.96455416s
	I1020 12:42:32.397222  268521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-916479
	I1020 12:42:32.417239  268521 ssh_runner.go:195] Run: cat /version.json
	I1020 12:42:32.417301  268521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:32.417382  268521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:42:32.417463  268521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:32.441890  268521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/newest-cni-916479/id_rsa Username:docker}
	I1020 12:42:32.444865  268521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/newest-cni-916479/id_rsa Username:docker}
	I1020 12:42:32.631576  268521 ssh_runner.go:195] Run: systemctl --version
	I1020 12:42:32.638877  268521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:42:32.689046  268521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:42:32.694252  268521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:42:32.694316  268521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:42:32.702921  268521 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 12:42:32.702960  268521 start.go:495] detecting cgroup driver to use...
	I1020 12:42:32.703008  268521 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:42:32.703061  268521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:42:32.719851  268521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:42:32.733979  268521 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:42:32.734036  268521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:42:32.752114  268521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:42:32.767480  268521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:42:32.896272  268521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:42:33.042365  268521 docker.go:234] disabling docker service ...
	I1020 12:42:33.042433  268521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:42:33.062317  268521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:42:33.079454  268521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:42:33.164276  268521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:42:33.252698  268521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:42:33.265200  268521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:42:33.280856  268521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:42:33.280908  268521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:33.289610  268521 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:42:33.289666  268521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:33.298332  268521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:33.308531  268521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:33.317593  268521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:42:33.325496  268521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:33.336062  268521 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:33.345038  268521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:33.353960  268521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:42:33.361853  268521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
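
The sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on a known state: the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. One way to confirm the result before the restart below, with the expected values shown as comments (a sketch):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
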
	I1020 12:42:33.370507  268521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:42:33.463276  268521 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:42:33.564076  268521 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:42:33.564141  268521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:42:33.568608  268521 start.go:563] Will wait 60s for crictl version
	I1020 12:42:33.568665  268521 ssh_runner.go:195] Run: which crictl
	I1020 12:42:33.572351  268521 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:42:33.598270  268521 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:42:33.598363  268521 ssh_runner.go:195] Run: crio --version
	I1020 12:42:33.629639  268521 ssh_runner.go:195] Run: crio --version
	I1020 12:42:33.663798  268521 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:42:33.664946  268521 cli_runner.go:164] Run: docker network inspect newest-cni-916479 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:42:33.682270  268521 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 12:42:33.686412  268521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:42:33.698379  268521 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1020 12:42:30.263429  263183 out.go:252]   - Booting up control plane ...
	I1020 12:42:30.263592  263183 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 12:42:30.263715  263183 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 12:42:30.265517  263183 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 12:42:30.280227  263183 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 12:42:30.280435  263183 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 12:42:30.287369  263183 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 12:42:30.287544  263183 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 12:42:30.287608  263183 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 12:42:30.391150  263183 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 12:42:30.391302  263183 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 12:42:30.891944  263183 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.864412ms
	I1020 12:42:30.894767  263183 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 12:42:30.894900  263183 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1020 12:42:30.895034  263183 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 12:42:30.895151  263183 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 12:42:33.017708  263183 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.122733763s
	I1020 12:42:33.096809  263183 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.201959691s
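
kubeadm's control-plane-check polls three endpoints in parallel: the apiserver's /livez on the advertise address, and the controller-manager's and scheduler's local health ports. Two of the three report healthy above; the same probes by hand (a sketch; -k since all three serve self-managed TLS):

    curl -ksf https://192.168.76.2:8443/livez      # kube-apiserver
    curl -ksf https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -ksf https://127.0.0.1:10259/livez        # kube-scheduler
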
	I1020 12:42:33.699539  268521 kubeadm.go:883] updating cluster {Name:newest-cni-916479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-916479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:42:33.699654  268521 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:42:33.699707  268521 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:42:33.733381  268521 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:42:33.733402  268521 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:42:33.733443  268521 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:42:33.759589  268521 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:42:33.759612  268521 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:42:33.759621  268521 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 12:42:33.759743  268521 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-916479 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-916479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
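
The unit text above is materialized a few lines later as /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in (the two scp transfers below). Once written, the effective unit including drop-ins can be viewed in one shot; a sketch, assuming it is run against the node container:

    docker exec newest-cni-916479 systemctl cat kubelet
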
	I1020 12:42:33.759845  268521 ssh_runner.go:195] Run: crio config
	I1020 12:42:33.803920  268521 cni.go:84] Creating CNI manager for ""
	I1020 12:42:33.803941  268521 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:42:33.803958  268521 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1020 12:42:33.803983  268521 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-916479 NodeName:newest-cni-916479 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:42:33.804112  268521 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-916479"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 12:42:33.804175  268521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:42:33.812539  268521 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:42:33.812614  268521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:42:33.820129  268521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1020 12:42:33.832758  268521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:42:33.846222  268521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
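
The rendered multi-document config (the 2211 bytes just copied) is staged as /var/tmp/minikube/kubeadm.yaml.new on the node; kubeadm consumes a file like this through its --config flag. A minimal sketch of that hand-off (the exact flags minikube adds are not shown in this log):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml
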
	I1020 12:42:33.858552  268521 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:42:33.862669  268521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:42:33.872515  268521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:42:33.968191  268521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:42:33.992674  268521 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/newest-cni-916479 for IP: 192.168.85.2
	I1020 12:42:33.992702  268521 certs.go:195] generating shared ca certs ...
	I1020 12:42:33.992726  268521 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:33.992912  268521 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:42:33.992985  268521 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:42:33.993018  268521 certs.go:257] generating profile certs ...
	I1020 12:42:33.993141  268521 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/newest-cni-916479/client.key
	I1020 12:42:33.993220  268521 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/newest-cni-916479/apiserver.key.c2df0bd4
	I1020 12:42:33.993278  268521 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/newest-cni-916479/proxy-client.key
	I1020 12:42:33.993468  268521 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:42:33.993518  268521 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:42:33.993532  268521 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:42:33.993567  268521 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:42:33.993600  268521 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:42:33.993631  268521 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:42:33.993687  268521 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:42:33.994454  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:42:34.031518  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:42:34.054206  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:42:34.076179  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:42:34.101967  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/newest-cni-916479/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1020 12:42:34.122483  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/newest-cni-916479/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:42:34.142088  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/newest-cni-916479/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:42:34.160812  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/newest-cni-916479/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:42:34.181524  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:42:34.202716  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:42:34.225285  268521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:42:34.247421  268521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:42:34.263821  268521 ssh_runner.go:195] Run: openssl version
	I1020 12:42:34.271171  268521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:42:34.282561  268521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:42:34.287453  268521 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:42:34.287516  268521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:42:34.339587  268521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:42:34.349602  268521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:42:34.359881  268521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:42:34.364510  268521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:42:34.364580  268521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:42:34.409165  268521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:42:34.418981  268521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:42:34.429101  268521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:42:34.433235  268521 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:42:34.433297  268521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:42:34.476734  268521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
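The openssl x509 -hash calls and ln -fs lines above implement OpenSSL's subject-hash lookup scheme: TLS clients resolve a CA by hashing its subject name and looking for <hash>.0 in /etc/ssl/certs. A sketch of the pairing, with paths from the log (the hash differs per certificate; above it came out as 3ec20f2e):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem)
	sudo ln -fs /etc/ssl/certs/145922.pem "/etc/ssl/certs/${h}.0"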
	I1020 12:42:34.487176  268521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:42:34.491601  268521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 12:42:34.539314  268521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 12:42:34.587686  268521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 12:42:34.640809  268521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 12:42:34.697125  268521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 12:42:34.734416  268521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
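Each -checkend 86400 call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; the command exits non-zero if the cert would expire inside that window, so the exit status alone is enough for the validity check:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h (or unreadable)"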
	I1020 12:42:34.769716  268521 kubeadm.go:400] StartCluster: {Name:newest-cni-916479 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-916479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:42:34.769871  268521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:42:34.769930  268521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:42:34.804635  268521 cri.go:89] found id: "34142c2a23f3beb7546b5829b045f4533e20cc9b20ce254c5c913f26c1392585"
	I1020 12:42:34.804673  268521 cri.go:89] found id: "4ed1efea72e5487e5e59517d90f26b5f62c3ef4e40854d57c12937d555a638b3"
	I1020 12:42:34.804678  268521 cri.go:89] found id: "7e4e331dd05d4a1c40b62eb1b442db845333bff25245997d0a175d9d9ef8fd1a"
	I1020 12:42:34.804683  268521 cri.go:89] found id: "567b442da6356bc69fde03e6c53050f8be519b40dfa1b75d724606afef5be74b"
	I1020 12:42:34.804688  268521 cri.go:89] found id: ""
	I1020 12:42:34.804733  268521 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 12:42:34.817143  268521 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:42:34Z" level=error msg="open /run/runc: no such file or directory"
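The warning above is non-fatal: runc keeps per-container state under its root directory, /run/runc by default when running as root, and that directory only exists once at least one runc container has been created. "no such file or directory" therefore just means there is nothing to unpause, and minikube carries on. The same query with the root spelled out explicitly:

	sudo runc --root /run/runc list -f json   # same as the default root for uid 0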
	I1020 12:42:34.817221  268521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:42:34.825145  268521 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 12:42:34.825170  268521 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 12:42:34.825208  268521 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 12:42:34.832948  268521 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:42:34.833688  268521 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-916479" does not appear in /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:42:34.834204  268521 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-11075/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-916479" cluster setting kubeconfig missing "newest-cni-916479" context setting]
	I1020 12:42:34.834987  268521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:34.836720  268521 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 12:42:34.844587  268521 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1020 12:42:34.844615  268521 kubeadm.go:601] duration metric: took 19.44045ms to restartPrimaryControlPlane
	I1020 12:42:34.844622  268521 kubeadm.go:402] duration metric: took 74.959723ms to StartCluster
	I1020 12:42:34.844634  268521 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:34.844689  268521 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:42:34.845668  268521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:34.845911  268521 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:42:34.845975  268521 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:42:34.846113  268521 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-916479"
	I1020 12:42:34.846135  268521 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-916479"
	I1020 12:42:34.846135  268521 addons.go:69] Setting dashboard=true in profile "newest-cni-916479"
	I1020 12:42:34.846163  268521 addons.go:238] Setting addon dashboard=true in "newest-cni-916479"
	I1020 12:42:34.846159  268521 addons.go:69] Setting default-storageclass=true in profile "newest-cni-916479"
	I1020 12:42:34.846167  268521 config.go:182] Loaded profile config "newest-cni-916479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	W1020 12:42:34.846176  268521 addons.go:247] addon dashboard should already be in state true
	I1020 12:42:34.846187  268521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-916479"
	I1020 12:42:34.846215  268521 host.go:66] Checking if "newest-cni-916479" exists ...
	W1020 12:42:34.846147  268521 addons.go:247] addon storage-provisioner should already be in state true
	I1020 12:42:34.846251  268521 host.go:66] Checking if "newest-cni-916479" exists ...
	I1020 12:42:34.846550  268521 cli_runner.go:164] Run: docker container inspect newest-cni-916479 --format={{.State.Status}}
	I1020 12:42:34.846559  268521 cli_runner.go:164] Run: docker container inspect newest-cni-916479 --format={{.State.Status}}
	I1020 12:42:34.846701  268521 cli_runner.go:164] Run: docker container inspect newest-cni-916479 --format={{.State.Status}}
	I1020 12:42:34.848637  268521 out.go:179] * Verifying Kubernetes components...
	I1020 12:42:34.850204  268521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:42:34.872026  268521 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 12:42:34.872029  268521 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:42:34.873045  268521 addons.go:238] Setting addon default-storageclass=true in "newest-cni-916479"
	W1020 12:42:34.873070  268521 addons.go:247] addon default-storageclass should already be in state true
	I1020 12:42:34.873099  268521 host.go:66] Checking if "newest-cni-916479" exists ...
	I1020 12:42:34.873537  268521 cli_runner.go:164] Run: docker container inspect newest-cni-916479 --format={{.State.Status}}
	I1020 12:42:34.874410  268521 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:42:34.874426  268521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:42:34.874473  268521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
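The docker container inspect -f template above digs the published host port out of the inspect JSON: index into the .NetworkSettings.Ports map by the "22/tcp" key, take element 0 of the resulting binding array, then read .HostPort. Run on its own, without the extra quoting the log adds around the template, it prints the bare port:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  newest-cni-916479   # e.g. 33088, the port the ssh clients below dial on 127.0.0.1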
	I1020 12:42:34.875079  268521 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1020 12:42:34.899067  263183 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.004159548s
	I1020 12:42:34.921153  263183 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 12:42:34.935084  263183 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 12:42:34.951178  263183 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 12:42:34.951479  263183 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-907116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 12:42:34.962490  263183 kubeadm.go:318] [bootstrap-token] Using token: pwxskb.mb7od8l3j01a0lgw
	I1020 12:42:33.160045  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:42:33.160449  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:42:33.160497  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:42:33.160550  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:42:33.187957  236655 cri.go:89] found id: "badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:33.187983  236655 cri.go:89] found id: ""
	I1020 12:42:33.187993  236655 logs.go:282] 1 containers: [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a]
	I1020 12:42:33.188080  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:33.193298  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:42:33.193379  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:42:33.224006  236655 cri.go:89] found id: ""
	I1020 12:42:33.224049  236655 logs.go:282] 0 containers: []
	W1020 12:42:33.224057  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:42:33.224063  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:42:33.224117  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:42:33.252354  236655 cri.go:89] found id: ""
	I1020 12:42:33.252382  236655 logs.go:282] 0 containers: []
	W1020 12:42:33.252390  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:42:33.252396  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:42:33.252448  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:42:33.278950  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:33.278980  236655 cri.go:89] found id: ""
	I1020 12:42:33.278990  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:42:33.279047  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:33.282981  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:42:33.283040  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:42:33.311214  236655 cri.go:89] found id: ""
	I1020 12:42:33.311236  236655 logs.go:282] 0 containers: []
	W1020 12:42:33.311275  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:42:33.311283  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:42:33.311367  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:42:33.338592  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:33.338615  236655 cri.go:89] found id: ""
	I1020 12:42:33.338626  236655 logs.go:282] 1 containers: [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb]
	I1020 12:42:33.338681  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:33.342406  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:42:33.342466  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:42:33.368633  236655 cri.go:89] found id: ""
	I1020 12:42:33.368662  236655 logs.go:282] 0 containers: []
	W1020 12:42:33.368673  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:42:33.368681  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:42:33.368741  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:42:33.395191  236655 cri.go:89] found id: ""
	I1020 12:42:33.395223  236655 logs.go:282] 0 containers: []
	W1020 12:42:33.395234  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:42:33.395247  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:42:33.395265  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
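The container-status command above is a deliberate fallback chain: use crictl from PATH if it resolves, otherwise fall back to the bare name so the error message names the missing binary, and if the whole crictl invocation fails, try docker instead. As a standalone sketch:

	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a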
	I1020 12:42:33.429270  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:33.429294  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:33.526245  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:42:33.526281  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:42:33.541990  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:42:33.542017  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:42:33.601668  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:42:33.601689  236655 logs.go:123] Gathering logs for kube-apiserver [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a] ...
	I1020 12:42:33.601703  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:33.635889  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:42:33.635915  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:33.692867  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:33.692893  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:33.721306  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:42:33.721338  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:42:34.964446  263183 out.go:252]   - Configuring RBAC rules ...
	I1020 12:42:34.964593  263183 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 12:42:34.970908  263183 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 12:42:34.978703  263183 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 12:42:34.981762  263183 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 12:42:34.984252  263183 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 12:42:34.987528  263183 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 12:42:35.309982  263183 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 12:42:35.736716  263183 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 12:42:36.313463  263183 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 12:42:36.313558  263183 kubeadm.go:318] 
	I1020 12:42:36.313651  263183 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 12:42:36.313657  263183 kubeadm.go:318] 
	I1020 12:42:36.313794  263183 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 12:42:36.313817  263183 kubeadm.go:318] 
	I1020 12:42:36.313847  263183 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 12:42:36.313918  263183 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 12:42:36.313979  263183 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 12:42:36.313987  263183 kubeadm.go:318] 
	I1020 12:42:36.314061  263183 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 12:42:36.314067  263183 kubeadm.go:318] 
	I1020 12:42:36.314140  263183 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 12:42:36.314163  263183 kubeadm.go:318] 
	I1020 12:42:36.314235  263183 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 12:42:36.314333  263183 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 12:42:36.314426  263183 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 12:42:36.314432  263183 kubeadm.go:318] 
	I1020 12:42:36.314549  263183 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 12:42:36.314655  263183 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 12:42:36.314661  263183 kubeadm.go:318] 
	I1020 12:42:36.314788  263183 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pwxskb.mb7od8l3j01a0lgw \
	I1020 12:42:36.314931  263183 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e \
	I1020 12:42:36.314960  263183 kubeadm.go:318] 	--control-plane 
	I1020 12:42:36.314975  263183 kubeadm.go:318] 
	I1020 12:42:36.315096  263183 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 12:42:36.315102  263183 kubeadm.go:318] 
	I1020 12:42:36.315214  263183 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pwxskb.mb7od8l3j01a0lgw \
	I1020 12:42:36.315353  263183 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e 
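The join command embeds a SHA-256 hash of the cluster CA's public key so joining nodes can authenticate the control plane before trusting it. The value after sha256: can be recomputed from the CA certificate with the recipe from the kubeadm documentation (this form assumes an RSA CA key; substitute openssl pkey for other key types):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'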
	I1020 12:42:36.321018  263183 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1020 12:42:36.321207  263183 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1020 12:42:36.321237  263183 cni.go:84] Creating CNI manager for ""
	I1020 12:42:36.321246  263183 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:42:36.323236  263183 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 12:42:34.876165  268521 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 12:42:34.876205  268521 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 12:42:34.876258  268521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:34.908601  268521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/newest-cni-916479/id_rsa Username:docker}
	I1020 12:42:34.910192  268521 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:42:34.910213  268521 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:42:34.910263  268521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-916479
	I1020 12:42:34.915857  268521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/newest-cni-916479/id_rsa Username:docker}
	I1020 12:42:34.941144  268521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/newest-cni-916479/id_rsa Username:docker}
	I1020 12:42:35.002245  268521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:42:35.015273  268521 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:42:35.015333  268521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:42:35.030835  268521 api_server.go:72] duration metric: took 184.895458ms to wait for apiserver process to appear ...
	I1020 12:42:35.030861  268521 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:42:35.030882  268521 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:42:35.031548  268521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:42:35.046726  268521 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 12:42:35.046750  268521 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 12:42:35.067999  268521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:42:35.068552  268521 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 12:42:35.068574  268521 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 12:42:35.088250  268521 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 12:42:35.088281  268521 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 12:42:35.114138  268521 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 12:42:35.114161  268521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 12:42:35.133811  268521 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 12:42:35.133841  268521 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 12:42:35.150012  268521 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 12:42:35.150042  268521 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 12:42:35.164197  268521 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 12:42:35.164232  268521 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 12:42:35.178654  268521 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 12:42:35.178679  268521 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 12:42:35.193889  268521 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 12:42:35.193914  268521 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 12:42:35.208210  268521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
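All ten dashboard manifests above go through one kubectl apply with repeated -f flags; kubectl applies them in the order given, which keeps the namespace manifest first. A directory form would also work as shorthand (a sketch, and only equivalent if nothing else lives in that directory; here the addons dir also holds the storage manifests):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/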
	I1020 12:42:36.318168  268521 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 12:42:36.318197  268521 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 12:42:36.318214  268521 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:42:36.325690  268521 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 12:42:36.325717  268521 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 12:42:36.531663  268521 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:42:36.537873  268521 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:42:36.537898  268521 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
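When any check fails, /healthz returns 500 together with the per-check breakdown shown above; the two [-] lines are the RBAC and priority-class bootstrap hooks that have not finished yet. Individual checks can also be probed as subpaths (assuming the standard kube-apiserver healthz mux, which registers one endpoint per named check):

	curl -k https://192.168.85.2:8443/healthz/poststarthook/rbac/bootstrap-roles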
	I1020 12:42:37.029509  268521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.997934726s)
	I1020 12:42:37.029569  268521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.961536422s)
	I1020 12:42:37.029673  268521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.821425506s)
	I1020 12:42:37.031078  268521 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:42:37.032389  268521 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-916479 addons enable metrics-server
	
	I1020 12:42:37.036639  268521 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:42:37.036665  268521 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:42:37.042972  268521 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1020 12:42:37.044462  268521 addons.go:514] duration metric: took 2.19849165s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1020 12:42:37.531992  268521 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:42:37.536895  268521 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 12:42:37.537952  268521 api_server.go:141] control plane version: v1.34.1
	I1020 12:42:37.537983  268521 api_server.go:131] duration metric: took 2.507114351s to wait for apiserver health ...
	I1020 12:42:37.537994  268521 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:42:37.541292  268521 system_pods.go:59] 8 kube-system pods found
	I1020 12:42:37.541326  268521 system_pods.go:61] "coredns-66bc5c9577-vzfdm" [f7f4d775-e755-4492-a871-2112f78ad674] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1020 12:42:37.541338  268521 system_pods.go:61] "etcd-newest-cni-916479" [6cc5b1dc-6bb0-463f-9043-3a9746e939bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:42:37.541352  268521 system_pods.go:61] "kindnet-zntlb" [c49499e7-f553-4426-9c41-b6e9c93c1ee1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1020 12:42:37.541361  268521 system_pods.go:61] "kube-apiserver-newest-cni-916479" [b8088dee-3005-4d3c-8753-f41354d80508] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:42:37.541374  268521 system_pods.go:61] "kube-controller-manager-newest-cni-916479" [9a7c1de4-c31f-4d72-852c-80f5dbadd6bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:42:37.541384  268521 system_pods.go:61] "kube-proxy-csrfg" [2a56ea05-0dbb-4a3a-8510-14d952a4f69b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1020 12:42:37.541395  268521 system_pods.go:61] "kube-scheduler-newest-cni-916479" [3642e359-9ffa-4416-a373-f3ea7bdceefd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:42:37.541407  268521 system_pods.go:61] "storage-provisioner" [25f66360-7044-408f-ba49-a64e624206c4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1020 12:42:37.541416  268521 system_pods.go:74] duration metric: took 3.415711ms to wait for pod list to return data ...
	I1020 12:42:37.541430  268521 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:42:37.543411  268521 default_sa.go:45] found service account: "default"
	I1020 12:42:37.543434  268521 default_sa.go:55] duration metric: took 1.997582ms for default service account to be created ...
	I1020 12:42:37.543447  268521 kubeadm.go:586] duration metric: took 2.697512444s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1020 12:42:37.543466  268521 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:42:37.545822  268521 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:42:37.545849  268521 node_conditions.go:123] node cpu capacity is 8
	I1020 12:42:37.545867  268521 node_conditions.go:105] duration metric: took 2.394782ms to run NodePressure ...
	I1020 12:42:37.545882  268521 start.go:241] waiting for startup goroutines ...
	I1020 12:42:37.545894  268521 start.go:246] waiting for cluster config update ...
	I1020 12:42:37.545910  268521 start.go:255] writing updated cluster config ...
	I1020 12:42:37.546230  268521 ssh_runner.go:195] Run: rm -f paused
	I1020 12:42:37.604835  268521 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:42:37.606821  268521 out.go:179] * Done! kubectl is now configured to use "newest-cni-916479" cluster and "default" namespace by default
	I1020 12:42:36.324572  263183 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 12:42:36.332120  263183 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 12:42:36.332145  263183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 12:42:36.357107  263183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 12:42:36.698869  263183 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 12:42:36.698942  263183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:36.699075  263183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-907116 minikube.k8s.io/updated_at=2025_10_20T12_42_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=embed-certs-907116 minikube.k8s.io/primary=true
	I1020 12:42:36.826694  263183 ops.go:34] apiserver oom_adj: -16
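The oom_adj check above reads the apiserver's score through the legacy /proc interface; -16 is strongly negative (the kernel scales oom_adj into oom_score_adj, so -16 lands around -941), meaning the OOM killer will spare the apiserver under memory pressure. Reproduced by hand, as the test itself does a few lines earlier:

	cat /proc/"$(pgrep kube-apiserver)"/oom_adj   # -16; negative values are deprioritized by the OOM killer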
	I1020 12:42:36.826827  263183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:37.327714  263183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:37.827757  263183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:38.327724  263183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:38.827590  263183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:39.327219  263183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:39.826972  263183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:40.327478  263183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:40.826919  263183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:42:40.918117  263183 kubeadm.go:1113] duration metric: took 4.219234551s to wait for elevateKubeSystemPrivileges
	I1020 12:42:40.918154  263183 kubeadm.go:402] duration metric: took 14.636502603s to StartCluster
	I1020 12:42:40.918176  263183 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:40.918250  263183 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:42:40.920284  263183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:40.920573  263183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 12:42:40.920588  263183 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:42:40.920661  263183 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-907116"
	I1020 12:42:40.920684  263183 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-907116"
	I1020 12:42:40.920682  263183 addons.go:69] Setting default-storageclass=true in profile "embed-certs-907116"
	I1020 12:42:40.920707  263183 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-907116"
	I1020 12:42:40.920762  263183 config.go:182] Loaded profile config "embed-certs-907116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:40.920565  263183 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:42:40.920708  263183 host.go:66] Checking if "embed-certs-907116" exists ...
	I1020 12:42:40.921195  263183 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:42:40.921534  263183 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:42:40.922588  263183 out.go:179] * Verifying Kubernetes components...
	I1020 12:42:40.924477  263183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:42:40.957154  263183 addons.go:238] Setting addon default-storageclass=true in "embed-certs-907116"
	I1020 12:42:40.957243  263183 host.go:66] Checking if "embed-certs-907116" exists ...
	I1020 12:42:40.957169  263183 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.39000327Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.394388286Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=76a9aa77-c860-4257-a081-13f951c3ebd7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.395096798Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=05771de5-423b-43e0-aab1-987eac76ed55 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.396180534Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.396876689Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.397132938Z" level=info msg="Ran pod sandbox d59abef52c970b5fd24a5c106d78221f37ff2609f6d1aa16ffca4c0f9147b3d5 with infra container: kube-system/kindnet-zntlb/POD" id=76a9aa77-c860-4257-a081-13f951c3ebd7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.397766945Z" level=info msg="Ran pod sandbox be84201ce815662d9ae91ae29d8aa9d9b81f7c6c5054232f6d31984386af3f74 with infra container: kube-system/kube-proxy-csrfg/POD" id=05771de5-423b-43e0-aab1-987eac76ed55 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.398469384Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ad666ca2-8ae3-4453-9c5a-86ffb10d3044 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.399476327Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=059f98ba-fe8e-4770-a3fa-dfe63dc5604d name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.399727706Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=dfd6876d-4607-4004-ae73-aa36e4191cb3 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.400988997Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ab972e20-9aa9-4f06-b840-30ff3a7256d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.401259193Z" level=info msg="Creating container: kube-system/kindnet-zntlb/kindnet-cni" id=2949919b-76f4-43c7-9a41-6059ede1f293 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.401368598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.401990915Z" level=info msg="Creating container: kube-system/kube-proxy-csrfg/kube-proxy" id=c827414e-d761-4570-9f43-06c34a88fe00 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.402131902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.406945374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.40755356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.409767317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.410447489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.43721617Z" level=info msg="Created container a44e701d9cff939b12d7bb214628681bd95e3c111390bf3b7f97781629394c3f: kube-system/kindnet-zntlb/kindnet-cni" id=2949919b-76f4-43c7-9a41-6059ede1f293 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.437919457Z" level=info msg="Starting container: a44e701d9cff939b12d7bb214628681bd95e3c111390bf3b7f97781629394c3f" id=4a010b2d-eece-487c-8a7c-707472facc2a name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.44016731Z" level=info msg="Started container" PID=1032 containerID=a44e701d9cff939b12d7bb214628681bd95e3c111390bf3b7f97781629394c3f description=kube-system/kindnet-zntlb/kindnet-cni id=4a010b2d-eece-487c-8a7c-707472facc2a name=/runtime.v1.RuntimeService/StartContainer sandboxID=d59abef52c970b5fd24a5c106d78221f37ff2609f6d1aa16ffca4c0f9147b3d5
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.441350115Z" level=info msg="Created container 376c4310e513c93fb27da3ecf5a92c92a7d091aff4be289f7efcbb5ef4529e52: kube-system/kube-proxy-csrfg/kube-proxy" id=c827414e-d761-4570-9f43-06c34a88fe00 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.442038262Z" level=info msg="Starting container: 376c4310e513c93fb27da3ecf5a92c92a7d091aff4be289f7efcbb5ef4529e52" id=7dfb4868-99cd-4e11-a2e5-5897b5ce786c name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.445126291Z" level=info msg="Started container" PID=1033 containerID=376c4310e513c93fb27da3ecf5a92c92a7d091aff4be289f7efcbb5ef4529e52 description=kube-system/kube-proxy-csrfg/kube-proxy id=7dfb4868-99cd-4e11-a2e5-5897b5ce786c name=/runtime.v1.RuntimeService/StartContainer sandboxID=be84201ce815662d9ae91ae29d8aa9d9b81f7c6c5054232f6d31984386af3f74
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	376c4310e513c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   be84201ce8156       kube-proxy-csrfg                            kube-system
	a44e701d9cff9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   d59abef52c970       kindnet-zntlb                               kube-system
	34142c2a23f3b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   b61c8f413fdf4       kube-controller-manager-newest-cni-916479   kube-system
	4ed1efea72e54       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   667922a2f6ee3       kube-apiserver-newest-cni-916479            kube-system
	7e4e331dd05d4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   40117c94620f7       kube-scheduler-newest-cni-916479            kube-system
	567b442da6356       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   e0588b0b259a9       etcd-newest-cni-916479                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-916479
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-916479
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=newest-cni-916479
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_42_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:42:13 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-916479
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:42:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:42:36 +0000   Mon, 20 Oct 2025 12:42:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:42:36 +0000   Mon, 20 Oct 2025 12:42:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:42:36 +0000   Mon, 20 Oct 2025 12:42:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 20 Oct 2025 12:42:36 +0000   Mon, 20 Oct 2025 12:42:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-916479
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                451dc7fc-eabb-4f6f-b460-dd1caba110ee
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-916479                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25s
	  kube-system                 kindnet-zntlb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-apiserver-newest-cni-916479             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-newest-cni-916479    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-csrfg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-scheduler-newest-cni-916479             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19s   kube-proxy       
	  Normal  Starting                 4s    kube-proxy       
	  Normal  Starting                 25s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s   kubelet          Node newest-cni-916479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s   kubelet          Node newest-cni-916479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s   kubelet          Node newest-cni-916479 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21s   node-controller  Node newest-cni-916479 event: Registered Node newest-cni-916479 in Controller
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-916479 event: Registered Node newest-cni-916479 in Controller
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [567b442da6356bc69fde03e6c53050f8be519b40dfa1b75d724606afef5be74b] <==
	{"level":"warn","ts":"2025-10-20T12:42:35.609258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.614135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.625050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.638017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.642727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.649438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.656821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.666351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.671154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.678939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.693964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.700433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.707850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.716553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.730055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.737984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.746294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.754206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.760595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.768032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.784514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.797158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.804579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.812289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.864541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58636","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:42:41 up  1:25,  0 user,  load average: 3.48, 3.34, 2.16
	Linux newest-cni-916479 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a44e701d9cff939b12d7bb214628681bd95e3c111390bf3b7f97781629394c3f] <==
	I1020 12:42:37.660590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:42:37.660866       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 12:42:37.660998       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:42:37.661020       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:42:37.661117       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:42:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:42:37.940439       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:42:37.940476       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:42:37.940484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:42:37.940601       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:42:38.441196       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:42:38.441233       1 metrics.go:72] Registering metrics
	I1020 12:42:38.441310       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [4ed1efea72e5487e5e59517d90f26b5f62c3ef4e40854d57c12937d555a638b3] <==
	I1020 12:42:36.381420       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1020 12:42:36.381278       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 12:42:36.381298       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 12:42:36.383263       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 12:42:36.383160       1 aggregator.go:171] initial CRD sync complete...
	I1020 12:42:36.383860       1 autoregister_controller.go:144] Starting autoregister controller
	I1020 12:42:36.383883       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 12:42:36.383891       1 cache.go:39] Caches are synced for autoregister controller
	I1020 12:42:36.390974       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:42:36.391153       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:42:36.392203       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 12:42:36.399117       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 12:42:36.399198       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:42:36.430188       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1020 12:42:36.823807       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:42:36.857340       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:42:36.882370       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:42:36.894354       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:42:36.904918       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:42:36.943859       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.204.112"}
	I1020 12:42:36.954068       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.105.99"}
	I1020 12:42:37.282119       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:42:39.238487       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:42:39.339353       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 12:42:39.389030       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [34142c2a23f3beb7546b5829b045f4533e20cc9b20ce254c5c913f26c1392585] <==
	I1020 12:42:38.935221       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 12:42:38.935245       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 12:42:38.935260       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 12:42:38.935398       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 12:42:38.935508       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 12:42:38.935598       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-916479"
	I1020 12:42:38.935687       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1020 12:42:38.935794       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1020 12:42:38.935856       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 12:42:38.935896       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 12:42:38.936007       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 12:42:38.936010       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:42:38.937195       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 12:42:38.939333       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 12:42:38.941837       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:42:38.941953       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1020 12:42:38.942020       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1020 12:42:38.942074       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1020 12:42:38.942083       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1020 12:42:38.942096       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1020 12:42:38.944228       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 12:42:38.946403       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 12:42:38.948620       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:42:38.963826       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 12:42:38.967158       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [376c4310e513c93fb27da3ecf5a92c92a7d091aff4be289f7efcbb5ef4529e52] <==
	I1020 12:42:37.484484       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:42:37.538225       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:42:37.638370       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:42:37.638415       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 12:42:37.638518       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:42:37.658294       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:42:37.658354       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:42:37.664317       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:42:37.664641       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:42:37.664668       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:42:37.668338       1 config.go:200] "Starting service config controller"
	I1020 12:42:37.668364       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:42:37.668371       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:42:37.668379       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:42:37.668436       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:42:37.668465       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:42:37.669020       1 config.go:309] "Starting node config controller"
	I1020 12:42:37.669077       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:42:37.669087       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:42:37.768616       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:42:37.768616       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 12:42:37.770227       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7e4e331dd05d4a1c40b62eb1b442db845333bff25245997d0a175d9d9ef8fd1a] <==
	I1020 12:42:35.054055       1 serving.go:386] Generated self-signed cert in-memory
	W1020 12:42:36.308294       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1020 12:42:36.308333       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1020 12:42:36.308344       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1020 12:42:36.308354       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1020 12:42:36.395616       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 12:42:36.395727       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:42:36.399592       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:42:36.399650       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:42:36.400622       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 12:42:36.400704       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 12:42:36.499919       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: E1020 12:42:36.132876     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-916479\" not found" node="newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.386150     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: E1020 12:42:36.407407     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-916479\" already exists" pod="kube-system/kube-scheduler-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.407455     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: E1020 12:42:36.421465     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-916479\" already exists" pod="kube-system/etcd-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.421516     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: E1020 12:42:36.430333     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-916479\" already exists" pod="kube-system/kube-apiserver-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.430375     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: E1020 12:42:36.438679     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-916479\" already exists" pod="kube-system/kube-controller-manager-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.461288     661 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.461705     661 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.461944     661 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.463738     661 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.078104     661 apiserver.go:52] "Watching apiserver"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.083927     661 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.133435     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-916479"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: E1020 12:42:37.140137     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-916479\" already exists" pod="kube-system/kube-apiserver-newest-cni-916479"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.180444     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a56ea05-0dbb-4a3a-8510-14d952a4f69b-lib-modules\") pod \"kube-proxy-csrfg\" (UID: \"2a56ea05-0dbb-4a3a-8510-14d952a4f69b\") " pod="kube-system/kube-proxy-csrfg"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.180501     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c49499e7-f553-4426-9c41-b6e9c93c1ee1-cni-cfg\") pod \"kindnet-zntlb\" (UID: \"c49499e7-f553-4426-9c41-b6e9c93c1ee1\") " pod="kube-system/kindnet-zntlb"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.180658     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c49499e7-f553-4426-9c41-b6e9c93c1ee1-xtables-lock\") pod \"kindnet-zntlb\" (UID: \"c49499e7-f553-4426-9c41-b6e9c93c1ee1\") " pod="kube-system/kindnet-zntlb"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.180700     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c49499e7-f553-4426-9c41-b6e9c93c1ee1-lib-modules\") pod \"kindnet-zntlb\" (UID: \"c49499e7-f553-4426-9c41-b6e9c93c1ee1\") " pod="kube-system/kindnet-zntlb"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.180731     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a56ea05-0dbb-4a3a-8510-14d952a4f69b-xtables-lock\") pod \"kube-proxy-csrfg\" (UID: \"2a56ea05-0dbb-4a3a-8510-14d952a4f69b\") " pod="kube-system/kube-proxy-csrfg"
	Oct 20 12:42:38 newest-cni-916479 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 12:42:38 newest-cni-916479 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 12:42:38 newest-cni-916479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
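The tail of the kubelet log above shows systemd stopping kubelet.service at 12:42:38, which lines up with the pause -p newest-cni-916479 entry in the Audit table below and explains the mixed status results that follow. A minimal sketch (not part of the harness) for confirming the frozen kubelet from outside the node, assuming the binary path and profile name used throughout this report:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Sketch only: ask systemd inside the node whether the kubelet is
	// still active after the pause. "inactive" is the expected answer
	// once the pause step has stopped kubelet.service.
	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "newest-cni-916479", "ssh", "--",
			"sudo", "systemctl", "is-active", "kubelet").CombinedOutput()
		fmt.Printf("kubelet: %s", out) // e.g. "inactive"
		if err != nil {
			// systemctl is-active exits non-zero for any state other
			// than "active", so this error is informative, not fatal.
			fmt.Println("exit:", err)
		}
	}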
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-916479 -n newest-cni-916479
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-916479 -n newest-cni-916479: exit status 2 (361.801162ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
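The status probe prints the field selected by --format but signals overall cluster health through its exit code, which is why the harness records the non-zero exit as "(may be ok)" and keeps going instead of failing the post-mortem. A hedged sketch of that tolerant pattern, reusing the command from the line above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Sketch of the probe above, not minikube's own code: keep whatever
	// the command printed even when the exit code is non-zero, since the
	// exit code reflects cluster state rather than a command failure.
	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}",
			"-p", "newest-cni-916479", "-n", "newest-cni-916479").CombinedOutput()
		fmt.Printf("apiserver: %s\n", out) // "Running" here, despite exit status 2
		if err != nil {
			fmt.Println("status exit:", err) // logged, like "(may be ok)" above
		}
	}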
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-916479 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-vzfdm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cftgj kubernetes-dashboard-855c9754f9-wkch9
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-916479 describe pod coredns-66bc5c9577-vzfdm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cftgj kubernetes-dashboard-855c9754f9-wkch9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-916479 describe pod coredns-66bc5c9577-vzfdm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cftgj kubernetes-dashboard-855c9754f9-wkch9: exit status 1 (64.530269ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-vzfdm" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-cftgj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wkch9" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-916479 describe pod coredns-66bc5c9577-vzfdm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cftgj kubernetes-dashboard-855c9754f9-wkch9: exit status 1
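All four describes return NotFound even though the pods were listed moments earlier: the query at helpers_test.go:269 spans every namespace via -A, while the describe runs without a namespace and therefore searches only "default", where none of these kube-system and kubernetes-dashboard pods live. A sketch (not the harness's code) that carries the namespace through to each describe:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Sketch only: list namespace/name pairs for non-running pods, then
	// describe each pod in its own namespace so kube-system and
	// kubernetes-dashboard pods are found instead of hitting NotFound.
	func main() {
		out, err := exec.Command("kubectl", "--context", "newest-cni-916479",
			"get", "po", "-A", "--field-selector=status.phase!=Running",
			"-o", `jsonpath={range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}`).Output()
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}
		for _, pair := range strings.Fields(string(out)) {
			ns, name, ok := strings.Cut(pair, "/")
			if !ok {
				continue
			}
			desc, err := exec.Command("kubectl", "--context", "newest-cni-916479",
				"describe", "pod", "-n", ns, name).CombinedOutput()
			fmt.Printf("--- %s/%s ---\n%s", ns, name, desc)
			if err != nil {
				fmt.Println("describe error:", err)
			}
		}
	}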
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-916479
helpers_test.go:243: (dbg) docker inspect newest-cni-916479:

-- stdout --
	[
	    {
	        "Id": "f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef",
	        "Created": "2025-10-20T12:42:01.570705232Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268768,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:42:27.482346674Z",
	            "FinishedAt": "2025-10-20T12:42:26.546948302Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef/hosts",
	        "LogPath": "/var/lib/docker/containers/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef/f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef-json.log",
	        "Name": "/newest-cni-916479",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-916479:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-916479",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f767c4ce93d093ce38bad69c937dcb780fe4b88e36f0047e12efbb6dcd7dc3ef",
	                "LowerDir": "/var/lib/docker/overlay2/972670fa2c6ad8377499180ff37e45c87569b7462a3871ce6df2ede18d0d4614-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/972670fa2c6ad8377499180ff37e45c87569b7462a3871ce6df2ede18d0d4614/merged",
	                "UpperDir": "/var/lib/docker/overlay2/972670fa2c6ad8377499180ff37e45c87569b7462a3871ce6df2ede18d0d4614/diff",
	                "WorkDir": "/var/lib/docker/overlay2/972670fa2c6ad8377499180ff37e45c87569b7462a3871ce6df2ede18d0d4614/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-916479",
	                "Source": "/var/lib/docker/volumes/newest-cni-916479/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-916479",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-916479",
	                "name.minikube.sigs.k8s.io": "newest-cni-916479",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb286f19d48f465c7b954e1dec027a01a52b53368b65df18834f9dd643e26424",
	            "SandboxKey": "/var/run/docker/netns/fb286f19d48f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-916479": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:00:96:a5:10:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0498bf893ff4ca5c840f9bd85d2a414a351b283489487091a509c21cecdac157",
	                    "EndpointID": "ee61021587d6b272e748b9be55e648c10a614ae79444c0500c2e3f1106e3f44e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-916479",
	                        "f767c4ce93d0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
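The full docker inspect document above is captured for the record; individual checks typically narrow it to the fields they need with a Go template, as the docker container inspect --format={{.State.Status}} call at the top of this section does. A minimal sketch, assuming the same container name, that reads the two fields relevant to a pause test:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Sketch only: pull just the pause-relevant state fields instead of
	// parsing the full JSON blob; both appear in the inspect output above
	// ("Status": "running", "Paused": false).
	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"newest-cni-916479",
			"--format", "{{.State.Status}} paused={{.State.Paused}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("%s", out) // e.g. "running paused=false"
	}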
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-916479 -n newest-cni-916479
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-916479 -n newest-cni-916479: exit status 2 (340.560458ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-916479 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:40 UTC │ 20 Oct 25 12:41 UTC │
	│ image   │ old-k8s-version-384253 image list --format=json                                                                                                                                                                                               │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p old-k8s-version-384253 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ start   │ -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:42 UTC │
	│ image   │ no-preload-649841 image list --format=json                                                                                                                                                                                                    │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p no-preload-649841 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ delete  │ -p no-preload-649841                                                                                                                                                                                                                          │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ delete  │ -p no-preload-649841                                                                                                                                                                                                                          │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ start   │ -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p cert-expiration-365628 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p cert-expiration-365628                                                                                                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p disable-driver-mounts-796609                                                                                                                                                                                                               │ disable-driver-mounts-796609 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-874012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-916479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-874012 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ stop    │ -p newest-cni-916479 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-916479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ image   │ newest-cni-916479 image list --format=json                                                                                                                                                                                                    │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ pause   │ -p newest-cni-916479 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-874012 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:42:41
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:42:41.370930  272557 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:42:41.371062  272557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:42:41.371068  272557 out.go:374] Setting ErrFile to fd 2...
	I1020 12:42:41.371074  272557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:42:41.371424  272557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:42:41.372542  272557 out.go:368] Setting JSON to false
	I1020 12:42:41.374202  272557 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5110,"bootTime":1760959051,"procs":354,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:42:41.374338  272557 start.go:141] virtualization: kvm guest
	I1020 12:42:41.377800  272557 out.go:179] * [default-k8s-diff-port-874012] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:42:41.379503  272557 notify.go:220] Checking for updates...
	I1020 12:42:41.379521  272557 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:42:41.380678  272557 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:42:41.382391  272557 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:42:41.384387  272557 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:42:41.386004  272557 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:42:41.387520  272557 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:42:41.389956  272557 config.go:182] Loaded profile config "default-k8s-diff-port-874012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:41.390657  272557 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:42:41.422380  272557 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:42:41.422457  272557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:42:41.495081  272557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-20 12:42:41.482560863 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:42:41.495210  272557 docker.go:318] overlay module found
	I1020 12:42:41.497143  272557 out.go:179] * Using the docker driver based on existing profile
	I1020 12:42:40.958037  263183 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:42:40.958695  263183 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:42:40.958749  263183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:42:40.958816  263183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:42:40.999667  263183 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:42:40.999814  263183 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:42:40.999901  263183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:42:41.007930  263183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:42:41.037583  263183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:42:41.059160  263183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 12:42:41.105454  263183 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:42:41.150616  263183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:42:41.168652  263183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:42:41.297330  263183 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1020 12:42:41.300190  263183 node_ready.go:35] waiting up to 6m0s for node "embed-certs-907116" to be "Ready" ...
	I1020 12:42:41.564487  263183 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1020 12:42:41.498500  272557 start.go:305] selected driver: docker
	I1020 12:42:41.498521  272557 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-874012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:42:41.498640  272557 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:42:41.499419  272557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:42:41.571904  272557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-20 12:42:41.558952033 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:42:41.572313  272557 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:42:41.572352  272557 cni.go:84] Creating CNI manager for ""
	I1020 12:42:41.572414  272557 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:42:41.572472  272557 start.go:349] cluster config:
	{Name:default-k8s-diff-port-874012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:42:41.574172  272557 out.go:179] * Starting "default-k8s-diff-port-874012" primary control-plane node in "default-k8s-diff-port-874012" cluster
	I1020 12:42:41.575514  272557 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:42:41.576843  272557 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:42:41.577895  272557 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:42:41.577934  272557 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:42:41.577948  272557 cache.go:58] Caching tarball of preloaded images
	I1020 12:42:41.578005  272557 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:42:41.578022  272557 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:42:41.578044  272557 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:42:41.578149  272557 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/config.json ...
	I1020 12:42:41.601467  272557 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:42:41.601488  272557 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:42:41.601510  272557 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:42:41.601538  272557 start.go:360] acquireMachinesLock for default-k8s-diff-port-874012: {Name:mk3fe7fe7ce0d8961f5f623b6e43bccc5068bc5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:42:41.601604  272557 start.go:364] duration metric: took 43.83µs to acquireMachinesLock for "default-k8s-diff-port-874012"
	I1020 12:42:41.601626  272557 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:42:41.601632  272557 fix.go:54] fixHost starting: 
	I1020 12:42:41.601954  272557 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Status}}
	I1020 12:42:41.622139  272557 fix.go:112] recreateIfNeeded on default-k8s-diff-port-874012: state=Stopped err=<nil>
	W1020 12:42:41.622203  272557 fix.go:138] unexpected machine state, will restart: <nil>
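
The CoreDNS rewrite at 12:42:41.059 above injects a host record so in-cluster pods can resolve the host machine. A minimal way to confirm the result, assuming the embed-certs-907116 kube-context that this run creates:

	# dump the live Corefile; the sed pipeline above should have inserted,
	# ahead of the forward block, a stanza equivalent to:
	#     hosts {
	#        192.168.76.1 host.minikube.internal
	#        fallthrough
	#     }
	kubectl --context embed-certs-907116 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'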
	
	
	==> CRI-O <==
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.39000327Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.394388286Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=76a9aa77-c860-4257-a081-13f951c3ebd7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.395096798Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=05771de5-423b-43e0-aab1-987eac76ed55 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.396180534Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.396876689Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.397132938Z" level=info msg="Ran pod sandbox d59abef52c970b5fd24a5c106d78221f37ff2609f6d1aa16ffca4c0f9147b3d5 with infra container: kube-system/kindnet-zntlb/POD" id=76a9aa77-c860-4257-a081-13f951c3ebd7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.397766945Z" level=info msg="Ran pod sandbox be84201ce815662d9ae91ae29d8aa9d9b81f7c6c5054232f6d31984386af3f74 with infra container: kube-system/kube-proxy-csrfg/POD" id=05771de5-423b-43e0-aab1-987eac76ed55 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.398469384Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ad666ca2-8ae3-4453-9c5a-86ffb10d3044 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.399476327Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=059f98ba-fe8e-4770-a3fa-dfe63dc5604d name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.399727706Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=dfd6876d-4607-4004-ae73-aa36e4191cb3 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.400988997Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ab972e20-9aa9-4f06-b840-30ff3a7256d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.401259193Z" level=info msg="Creating container: kube-system/kindnet-zntlb/kindnet-cni" id=2949919b-76f4-43c7-9a41-6059ede1f293 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.401368598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.401990915Z" level=info msg="Creating container: kube-system/kube-proxy-csrfg/kube-proxy" id=c827414e-d761-4570-9f43-06c34a88fe00 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.402131902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.406945374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.40755356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.409767317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.410447489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.43721617Z" level=info msg="Created container a44e701d9cff939b12d7bb214628681bd95e3c111390bf3b7f97781629394c3f: kube-system/kindnet-zntlb/kindnet-cni" id=2949919b-76f4-43c7-9a41-6059ede1f293 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.437919457Z" level=info msg="Starting container: a44e701d9cff939b12d7bb214628681bd95e3c111390bf3b7f97781629394c3f" id=4a010b2d-eece-487c-8a7c-707472facc2a name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.44016731Z" level=info msg="Started container" PID=1032 containerID=a44e701d9cff939b12d7bb214628681bd95e3c111390bf3b7f97781629394c3f description=kube-system/kindnet-zntlb/kindnet-cni id=4a010b2d-eece-487c-8a7c-707472facc2a name=/runtime.v1.RuntimeService/StartContainer sandboxID=d59abef52c970b5fd24a5c106d78221f37ff2609f6d1aa16ffca4c0f9147b3d5
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.441350115Z" level=info msg="Created container 376c4310e513c93fb27da3ecf5a92c92a7d091aff4be289f7efcbb5ef4529e52: kube-system/kube-proxy-csrfg/kube-proxy" id=c827414e-d761-4570-9f43-06c34a88fe00 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.442038262Z" level=info msg="Starting container: 376c4310e513c93fb27da3ecf5a92c92a7d091aff4be289f7efcbb5ef4529e52" id=7dfb4868-99cd-4e11-a2e5-5897b5ce786c name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:42:37 newest-cni-916479 crio[515]: time="2025-10-20T12:42:37.445126291Z" level=info msg="Started container" PID=1033 containerID=376c4310e513c93fb27da3ecf5a92c92a7d091aff4be289f7efcbb5ef4529e52 description=kube-system/kube-proxy-csrfg/kube-proxy id=7dfb4868-99cd-4e11-a2e5-5897b5ce786c name=/runtime.v1.RuntimeService/StartContainer sandboxID=be84201ce815662d9ae91ae29d8aa9d9b81f7c6c5054232f6d31984386af3f74
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	376c4310e513c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   be84201ce8156       kube-proxy-csrfg                            kube-system
	a44e701d9cff9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   d59abef52c970       kindnet-zntlb                               kube-system
	34142c2a23f3b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   b61c8f413fdf4       kube-controller-manager-newest-cni-916479   kube-system
	4ed1efea72e54       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   667922a2f6ee3       kube-apiserver-newest-cni-916479            kube-system
	7e4e331dd05d4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   40117c94620f7       kube-scheduler-newest-cni-916479            kube-system
	567b442da6356       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   e0588b0b259a9       etcd-newest-cni-916479                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-916479
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-916479
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=newest-cni-916479
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_42_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:42:13 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-916479
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:42:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:42:36 +0000   Mon, 20 Oct 2025 12:42:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:42:36 +0000   Mon, 20 Oct 2025 12:42:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:42:36 +0000   Mon, 20 Oct 2025 12:42:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 20 Oct 2025 12:42:36 +0000   Mon, 20 Oct 2025 12:42:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-916479
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                451dc7fc-eabb-4f6f-b460-dd1caba110ee
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-916479                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-zntlb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-916479             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-916479    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-csrfg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-916479             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21s   kube-proxy       
	  Normal  Starting                 6s    kube-proxy       
	  Normal  Starting                 27s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s   kubelet          Node newest-cni-916479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s   kubelet          Node newest-cni-916479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s   kubelet          Node newest-cni-916479 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23s   node-controller  Node newest-cni-916479 event: Registered Node newest-cni-916479 in Controller
	  Normal  RegisteredNode           5s    node-controller  Node newest-cni-916479 event: Registered Node newest-cni-916479 in Controller
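
The Ready=False condition above ("no CNI configuration file in /etc/cni/net.d/") is expected until kindnet writes its config; the CRI-O log earlier in this report shows the kindnet pod starting at 12:42:37. A quick way to watch the transition, assuming the newest-cni-916479 profile name from this run:

	# the directory stays empty until kindnet drops its CNI config file
	minikube -p newest-cni-916479 ssh -- ls -l /etc/cni/net.d/
	# CRI-O's NetworkReady condition flips to true once a config appears
	minikube -p newest-cni-916479 ssh -- sudo crictl info | grep -i networkready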
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
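
The repeated "martian source" lines are the kernel flagging packets whose source address (127.0.0.1 here) should never appear on eth0; with the NAT-heavy test traffic in these jobs this is noise rather than a fault. The logging itself is controlled by a standard sysctl:

	# inspect, then silence, martian-packet logging if the spam matters
	sysctl net.ipv4.conf.all.log_martians
	sudo sysctl -w net.ipv4.conf.all.log_martians=0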
	
	
	==> etcd [567b442da6356bc69fde03e6c53050f8be519b40dfa1b75d724606afef5be74b] <==
	{"level":"warn","ts":"2025-10-20T12:42:35.609258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.614135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.625050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.638017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.642727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.649438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.656821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.666351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.671154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.678939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.693964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.700433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.707850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.716553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.730055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.737984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.746294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.754206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.760595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.768032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.784514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.797158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.804579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.812289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:35.864541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58636","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:42:43 up  1:25,  0 user,  load average: 3.44, 3.33, 2.17
	Linux newest-cni-916479 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a44e701d9cff939b12d7bb214628681bd95e3c111390bf3b7f97781629394c3f] <==
	I1020 12:42:37.660590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:42:37.660866       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 12:42:37.660998       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:42:37.661020       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:42:37.661117       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:42:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:42:37.940439       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:42:37.940476       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:42:37.940484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:42:37.940601       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:42:38.441196       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:42:38.441233       1 metrics.go:72] Registering metrics
	I1020 12:42:38.441310       1 controller.go:711] "Syncing nftables rules"
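
The "nri plugin exited" line is logged at Info level and is non-fatal: the runtime exposes no NRI socket, so kindnet simply runs without that integration. Confirming the socket's absence, with the profile name from this run:

	# no socket here -> the dial error above is expected
	minikube -p newest-cni-916479 ssh -- ls -l /var/run/nri/nri.sock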
	
	
	==> kube-apiserver [4ed1efea72e5487e5e59517d90f26b5f62c3ef4e40854d57c12937d555a638b3] <==
	I1020 12:42:36.381420       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1020 12:42:36.381278       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 12:42:36.381298       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 12:42:36.383263       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 12:42:36.383160       1 aggregator.go:171] initial CRD sync complete...
	I1020 12:42:36.383860       1 autoregister_controller.go:144] Starting autoregister controller
	I1020 12:42:36.383883       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 12:42:36.383891       1 cache.go:39] Caches are synced for autoregister controller
	I1020 12:42:36.390974       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:42:36.391153       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:42:36.392203       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 12:42:36.399117       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 12:42:36.399198       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:42:36.430188       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1020 12:42:36.823807       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:42:36.857340       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:42:36.882370       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:42:36.894354       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:42:36.904918       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:42:36.943859       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.204.112"}
	I1020 12:42:36.954068       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.105.99"}
	I1020 12:42:37.282119       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:42:39.238487       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:42:39.339353       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 12:42:39.389030       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [34142c2a23f3beb7546b5829b045f4533e20cc9b20ce254c5c913f26c1392585] <==
	I1020 12:42:38.935221       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 12:42:38.935245       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 12:42:38.935260       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 12:42:38.935398       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 12:42:38.935508       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 12:42:38.935598       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-916479"
	I1020 12:42:38.935687       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1020 12:42:38.935794       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1020 12:42:38.935856       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 12:42:38.935896       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 12:42:38.936007       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 12:42:38.936010       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:42:38.937195       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 12:42:38.939333       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 12:42:38.941837       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:42:38.941953       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1020 12:42:38.942020       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1020 12:42:38.942074       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1020 12:42:38.942083       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1020 12:42:38.942096       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1020 12:42:38.944228       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 12:42:38.946403       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 12:42:38.948620       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:42:38.963826       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 12:42:38.967158       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [376c4310e513c93fb27da3ecf5a92c92a7d091aff4be289f7efcbb5ef4529e52] <==
	I1020 12:42:37.484484       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:42:37.538225       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:42:37.638370       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:42:37.638415       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 12:42:37.638518       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:42:37.658294       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:42:37.658354       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:42:37.664317       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:42:37.664641       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:42:37.664668       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:42:37.668338       1 config.go:200] "Starting service config controller"
	I1020 12:42:37.668364       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:42:37.668371       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:42:37.668379       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:42:37.668436       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:42:37.668465       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:42:37.669020       1 config.go:309] "Starting node config controller"
	I1020 12:42:37.669077       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:42:37.669087       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:42:37.768616       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:42:37.768616       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 12:42:37.770227       1 shared_informer.go:356] "Caches are synced" controller="service config"
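
The one E-level line above is advisory rather than an error: with nodePortAddresses unset, kube-proxy accepts NodePort traffic on every local IP. The knob lives in kube-proxy's own configuration, which kubeadm-style clusters (minikube included) keep in the kube-proxy ConfigMap; a sketch of checking it, taking the ConfigMap name and "config.conf" key as the usual kubeadm layout rather than something verified from this run:

	kubectl --context newest-cni-916479 -n kube-system get configmap kube-proxy \
	  -o jsonpath='{.data.config\.conf}' | grep -n nodePortAddresses
	# silencing the warning means setting, per the message itself:
	#   nodePortAddresses: ["primary"]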
	
	
	==> kube-scheduler [7e4e331dd05d4a1c40b62eb1b442db845333bff25245997d0a175d9d9ef8fd1a] <==
	I1020 12:42:35.054055       1 serving.go:386] Generated self-signed cert in-memory
	W1020 12:42:36.308294       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1020 12:42:36.308333       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1020 12:42:36.308344       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1020 12:42:36.308354       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1020 12:42:36.395616       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 12:42:36.395727       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:42:36.399592       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:42:36.399650       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:42:36.400622       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 12:42:36.400704       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 12:42:36.499919       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: E1020 12:42:36.132876     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-916479\" not found" node="newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.386150     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: E1020 12:42:36.407407     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-916479\" already exists" pod="kube-system/kube-scheduler-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.407455     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: E1020 12:42:36.421465     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-916479\" already exists" pod="kube-system/etcd-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.421516     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: E1020 12:42:36.430333     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-916479\" already exists" pod="kube-system/kube-apiserver-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.430375     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: E1020 12:42:36.438679     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-916479\" already exists" pod="kube-system/kube-controller-manager-newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.461288     661 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.461705     661 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-916479"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.461944     661 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 20 12:42:36 newest-cni-916479 kubelet[661]: I1020 12:42:36.463738     661 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.078104     661 apiserver.go:52] "Watching apiserver"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.083927     661 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.133435     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-916479"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: E1020 12:42:37.140137     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-916479\" already exists" pod="kube-system/kube-apiserver-newest-cni-916479"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.180444     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a56ea05-0dbb-4a3a-8510-14d952a4f69b-lib-modules\") pod \"kube-proxy-csrfg\" (UID: \"2a56ea05-0dbb-4a3a-8510-14d952a4f69b\") " pod="kube-system/kube-proxy-csrfg"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.180501     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c49499e7-f553-4426-9c41-b6e9c93c1ee1-cni-cfg\") pod \"kindnet-zntlb\" (UID: \"c49499e7-f553-4426-9c41-b6e9c93c1ee1\") " pod="kube-system/kindnet-zntlb"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.180658     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c49499e7-f553-4426-9c41-b6e9c93c1ee1-xtables-lock\") pod \"kindnet-zntlb\" (UID: \"c49499e7-f553-4426-9c41-b6e9c93c1ee1\") " pod="kube-system/kindnet-zntlb"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.180700     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c49499e7-f553-4426-9c41-b6e9c93c1ee1-lib-modules\") pod \"kindnet-zntlb\" (UID: \"c49499e7-f553-4426-9c41-b6e9c93c1ee1\") " pod="kube-system/kindnet-zntlb"
	Oct 20 12:42:37 newest-cni-916479 kubelet[661]: I1020 12:42:37.180731     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a56ea05-0dbb-4a3a-8510-14d952a4f69b-xtables-lock\") pod \"kube-proxy-csrfg\" (UID: \"2a56ea05-0dbb-4a3a-8510-14d952a4f69b\") " pod="kube-system/kube-proxy-csrfg"
	Oct 20 12:42:38 newest-cni-916479 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 12:42:38 newest-cni-916479 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 12:42:38 newest-cni-916479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-916479 -n newest-cni-916479
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-916479 -n newest-cni-916479: exit status 2 (324.549564ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-916479 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-vzfdm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cftgj kubernetes-dashboard-855c9754f9-wkch9
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-916479 describe pod coredns-66bc5c9577-vzfdm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cftgj kubernetes-dashboard-855c9754f9-wkch9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-916479 describe pod coredns-66bc5c9577-vzfdm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cftgj kubernetes-dashboard-855c9754f9-wkch9: exit status 1 (64.045849ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-vzfdm" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-cftgj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wkch9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-916479 describe pod coredns-66bc5c9577-vzfdm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cftgj kubernetes-dashboard-855c9754f9-wkch9: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-907116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-907116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (258.165683ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:43:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-907116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-907116 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-907116 describe deploy/metrics-server -n kube-system: exit status 1 (72.697681ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-907116 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-907116
helpers_test.go:243: (dbg) docker inspect embed-certs-907116:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff",
	        "Created": "2025-10-20T12:42:20.232246368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 264098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:42:20.273269867Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff/hosts",
	        "LogPath": "/var/lib/docker/containers/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff-json.log",
	        "Name": "/embed-certs-907116",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-907116:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-907116",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff",
	                "LowerDir": "/var/lib/docker/overlay2/93acb91eef931e2da86d1db5f88d8dcfad07ad2799ddc8a67593938796c2477a-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/93acb91eef931e2da86d1db5f88d8dcfad07ad2799ddc8a67593938796c2477a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/93acb91eef931e2da86d1db5f88d8dcfad07ad2799ddc8a67593938796c2477a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/93acb91eef931e2da86d1db5f88d8dcfad07ad2799ddc8a67593938796c2477a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-907116",
	                "Source": "/var/lib/docker/volumes/embed-certs-907116/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-907116",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-907116",
	                "name.minikube.sigs.k8s.io": "embed-certs-907116",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "feb97e34b63b08bf14d1cad6d461326b6387f9de70241d42f202dd00859ad519",
	            "SandboxKey": "/var/run/docker/netns/feb97e34b63b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-907116": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:11:bb:60:bd:82",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4e327fc0cc35f5e99ec36d310a3ce8c7214de7f81deb736225deef68fe8ea58b",
	                    "EndpointID": "b25eddecefc6d9699112f3d80dee091cd783750ca91e87acb99d154f1a268433",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-907116",
	                        "dde9a162828e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-907116 -n embed-certs-907116
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-907116 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-907116 logs -n 25: (1.173625448s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-384253                                                                                                                                                                                                                     │ old-k8s-version-384253       │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ start   │ -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:42 UTC │
	│ image   │ no-preload-649841 image list --format=json                                                                                                                                                                                                    │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ pause   │ -p no-preload-649841 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ delete  │ -p no-preload-649841                                                                                                                                                                                                                          │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ delete  │ -p no-preload-649841                                                                                                                                                                                                                          │ no-preload-649841            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:41 UTC │
	│ start   │ -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p cert-expiration-365628 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p cert-expiration-365628                                                                                                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p disable-driver-mounts-796609                                                                                                                                                                                                               │ disable-driver-mounts-796609 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-874012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-916479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-874012 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ stop    │ -p newest-cni-916479 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-916479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ image   │ newest-cni-916479 image list --format=json                                                                                                                                                                                                    │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ pause   │ -p newest-cni-916479 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-874012 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ delete  │ -p newest-cni-916479                                                                                                                                                                                                                          │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p newest-cni-916479                                                                                                                                                                                                                          │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p auto-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-312375                  │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-907116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:42:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:42:47.239973  275397 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:42:47.240204  275397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:42:47.240213  275397 out.go:374] Setting ErrFile to fd 2...
	I1020 12:42:47.240217  275397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:42:47.240423  275397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:42:47.240952  275397 out.go:368] Setting JSON to false
	I1020 12:42:47.242110  275397 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5116,"bootTime":1760959051,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:42:47.242197  275397 start.go:141] virtualization: kvm guest
	I1020 12:42:47.244397  275397 out.go:179] * [auto-312375] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:42:47.245948  275397 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:42:47.245965  275397 notify.go:220] Checking for updates...
	I1020 12:42:47.248722  275397 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:42:47.250405  275397 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:42:47.251918  275397 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:42:47.253478  275397 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:42:47.254871  275397 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:42:47.256685  275397 config.go:182] Loaded profile config "default-k8s-diff-port-874012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:47.256795  275397 config.go:182] Loaded profile config "embed-certs-907116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:47.256889  275397 config.go:182] Loaded profile config "kubernetes-upgrade-196539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:47.256992  275397 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:42:47.284666  275397 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:42:47.284786  275397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:42:47.353400  275397 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-20 12:42:47.342058165 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:42:47.353499  275397 docker.go:318] overlay module found
	I1020 12:42:47.356905  275397 out.go:179] * Using the docker driver based on user configuration
	I1020 12:42:47.358351  275397 start.go:305] selected driver: docker
	I1020 12:42:47.358371  275397 start.go:925] validating driver "docker" against <nil>
	I1020 12:42:47.358383  275397 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:42:47.358996  275397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:42:47.424524  275397 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-20 12:42:47.412762199 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:42:47.424739  275397 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 12:42:47.425076  275397 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:42:47.428594  275397 out.go:179] * Using Docker driver with root privileges
	I1020 12:42:47.430050  275397 cni.go:84] Creating CNI manager for ""
	I1020 12:42:47.430133  275397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:42:47.430145  275397 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 12:42:47.430251  275397 start.go:349] cluster config:
	{Name:auto-312375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-312375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:42:47.431721  275397 out.go:179] * Starting "auto-312375" primary control-plane node in "auto-312375" cluster
	I1020 12:42:47.433020  275397 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:42:47.434316  275397 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:42:47.435723  275397 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:42:47.435787  275397 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:42:47.435797  275397 cache.go:58] Caching tarball of preloaded images
	I1020 12:42:47.435859  275397 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:42:47.435923  275397 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:42:47.435938  275397 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:42:47.436064  275397 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/config.json ...
	I1020 12:42:47.436090  275397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/config.json: {Name:mkde9082f709c38dee06b58a5e82a355bcee11af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:47.458250  275397 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:42:47.458281  275397 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:42:47.458302  275397 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:42:47.458332  275397 start.go:360] acquireMachinesLock for auto-312375: {Name:mk4c9ca4c591ace6af3837f8add3381e0161bf3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:42:47.458449  275397 start.go:364] duration metric: took 96.033µs to acquireMachinesLock for "auto-312375"
	I1020 12:42:47.458477  275397 start.go:93] Provisioning new machine with config: &{Name:auto-312375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-312375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:42:47.458584  275397 start.go:125] createHost starting for "" (driver="docker")
	I1020 12:42:46.492651  272557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:42:46.492678  272557 machine.go:96] duration metric: took 4.535357773s to provisionDockerMachine
	I1020 12:42:46.492691  272557 start.go:293] postStartSetup for "default-k8s-diff-port-874012" (driver="docker")
	I1020 12:42:46.492725  272557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:42:46.492843  272557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:42:46.492902  272557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:42:46.516205  272557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:42:46.619578  272557 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:42:46.623363  272557 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:42:46.623398  272557 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:42:46.623412  272557 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:42:46.623463  272557 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:42:46.623533  272557 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:42:46.623660  272557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:42:46.631972  272557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:42:46.650176  272557 start.go:296] duration metric: took 157.450557ms for postStartSetup
	I1020 12:42:46.650277  272557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:42:46.650321  272557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:42:46.669124  272557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:42:46.767133  272557 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:42:46.772082  272557 fix.go:56] duration metric: took 5.170436609s for fixHost
	I1020 12:42:46.772114  272557 start.go:83] releasing machines lock for "default-k8s-diff-port-874012", held for 5.170495924s
	I1020 12:42:46.772183  272557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-874012
	I1020 12:42:46.792977  272557 ssh_runner.go:195] Run: cat /version.json
	I1020 12:42:46.793029  272557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:42:46.793045  272557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:42:46.793115  272557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:42:46.814691  272557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:42:46.815041  272557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:42:46.912410  272557 ssh_runner.go:195] Run: systemctl --version
	I1020 12:42:46.974063  272557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:42:47.012480  272557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:42:47.017834  272557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:42:47.017902  272557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:42:47.027414  272557 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 12:42:47.027440  272557 start.go:495] detecting cgroup driver to use...
	I1020 12:42:47.027476  272557 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:42:47.027527  272557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:42:47.043363  272557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:42:47.057258  272557 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:42:47.057314  272557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:42:47.074412  272557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:42:47.087808  272557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:42:47.177811  272557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:42:47.263508  272557 docker.go:234] disabling docker service ...
	I1020 12:42:47.263574  272557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:42:47.280809  272557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:42:47.295308  272557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:42:47.394974  272557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:42:47.491446  272557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:42:47.505948  272557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:42:47.523031  272557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:42:47.523087  272557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:47.534094  272557 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:42:47.534156  272557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:47.544158  272557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:47.555089  272557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:47.565555  272557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:42:47.574715  272557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:47.584433  272557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:47.594289  272557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:47.603933  272557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:42:47.612839  272557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:42:47.621432  272557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:42:47.722757  272557 ssh_runner.go:195] Run: sudo systemctl restart crio
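The tee/sed sequence above reduces to one crictl endpoint file plus four keys in the CRI-O drop-in. A minimal sketch of the same end state written directly; the table headers are an assumption from CRI-O's documented schema, and it presumes 02-crio.conf holds only the keys touched here:

    sudo tee /etc/crictl.yaml <<'EOF'
    runtime-endpoint: unix:///var/run/crio/crio.sock
    EOF
    sudo tee /etc/crio/crio.conf.d/02-crio.conf <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart crio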
	I1020 12:42:47.854741  272557 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:42:47.854851  272557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:42:47.859169  272557 start.go:563] Will wait 60s for crictl version
	I1020 12:42:47.859231  272557 ssh_runner.go:195] Run: which crictl
	I1020 12:42:47.862900  272557 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:42:47.890428  272557 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:42:47.890510  272557 ssh_runner.go:195] Run: crio --version
	I1020 12:42:47.923888  272557 ssh_runner.go:195] Run: crio --version
	I1020 12:42:47.958827  272557 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:42:47.960331  272557 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-874012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:42:47.980973  272557 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1020 12:42:47.985862  272557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
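The one-liner above is a dedupe-then-append rewrite of /etc/hosts: drop any line already tab-mapped to the name, append the fresh mapping, then copy the temp file back under sudo. The generic form of the pattern, with NAME and IP as placeholders:

    NAME=host.minikube.internal IP=192.168.103.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts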
	I1020 12:42:48.014041  272557 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-874012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:42:48.014204  272557 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:42:48.014270  272557 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:42:48.053447  272557 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:42:48.053469  272557 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:42:48.053514  272557 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:42:48.082510  272557 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:42:48.082533  272557 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:42:48.082539  272557 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1020 12:42:48.082635  272557 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-874012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
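The unit text above is installed as a systemd drop-in rather than a full unit (it is the 379-byte scp of 10-kubeadm.conf a few lines below). After the daemon-reload, the merged result can be inspected with:

    sudo systemctl daemon-reload
    systemctl cat kubelet   # prints kubelet.service plus the 10-kubeadm.conf drop-in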
	I1020 12:42:48.082692  272557 ssh_runner.go:195] Run: crio config
	I1020 12:42:48.137068  272557 cni.go:84] Creating CNI manager for ""
	I1020 12:42:48.137096  272557 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:42:48.137119  272557 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:42:48.137148  272557 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-874012 NodeName:default-k8s-diff-port-874012 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:42:48.137300  272557 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-874012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 12:42:48.137360  272557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:42:48.147289  272557 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:42:48.147373  272557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:42:48.156578  272557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1020 12:42:48.170873  272557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:42:48.184898  272557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
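The rendered manifest lands at /var/tmp/minikube/kubeadm.yaml.new before being diffed against the live copy below. As a sanity check, recent kubeadm releases can validate such a file directly (this assumes kubeadm sits alongside kubelet/kubectl in the binaries dir):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new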
	I1020 12:42:48.199099  272557 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:42:48.205345  272557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:42:48.222281  272557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:42:48.322453  272557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:42:48.343797  272557 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012 for IP: 192.168.103.2
	I1020 12:42:48.343820  272557 certs.go:195] generating shared ca certs ...
	I1020 12:42:48.343841  272557 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:48.343995  272557 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:42:48.344037  272557 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:42:48.344046  272557 certs.go:257] generating profile certs ...
	I1020 12:42:48.344154  272557 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/client.key
	I1020 12:42:48.344224  272557 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key.fa6ae681
	I1020 12:42:48.344277  272557 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.key
	I1020 12:42:48.344412  272557 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:42:48.344450  272557 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:42:48.344465  272557 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:42:48.344500  272557 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:42:48.344604  272557 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:42:48.344693  272557 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:42:48.344759  272557 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:42:48.345576  272557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:42:48.367094  272557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:42:48.388266  272557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:42:48.409254  272557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:42:48.432973  272557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1020 12:42:48.458244  272557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:42:48.478062  272557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:42:48.497022  272557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/default-k8s-diff-port-874012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 12:42:48.517562  272557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:42:48.538043  272557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:42:48.559706  272557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:42:48.579993  272557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:42:48.594400  272557 ssh_runner.go:195] Run: openssl version
	I1020 12:42:48.600968  272557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:42:48.610542  272557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:42:48.614886  272557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:42:48.614965  272557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:42:48.652378  272557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:42:48.662148  272557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:42:48.674583  272557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:42:48.679027  272557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:42:48.679081  272557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:42:48.715177  272557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:42:48.724389  272557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:42:48.734332  272557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:42:48.738881  272557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:42:48.738946  272557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:42:48.785624  272557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
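The ls/openssl/ln sequence above follows OpenSSL's hashed-directory convention: the link in /etc/ssl/certs is named after the certificate's subject hash plus a .0 suffix, which is how b5213941.0 pairs with minikubeCA.pem. By hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h=b5213941 here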
	I1020 12:42:48.794972  272557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:42:48.799432  272557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 12:42:48.836511  272557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 12:42:48.877141  272557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 12:42:48.922841  272557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 12:42:48.985128  272557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 12:42:49.044472  272557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
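Each -checkend 86400 probe exits nonzero if the certificate will have expired 86400 seconds (24 h) from now, presumably so expiring certs get regenerated rather than reused. Standalone:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      || echo 'front-proxy-client.crt expires within 24h'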
	I1020 12:42:49.104630  272557 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-874012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-874012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:42:49.104736  272557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:42:49.104841  272557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:42:49.142242  272557 cri.go:89] found id: "950cf2bcf663da8ddc81ce889407cc48e3d12e5e1bd9be508b2b13a09017120c"
	I1020 12:42:49.142265  272557 cri.go:89] found id: "361bbce2ef1dab79033c19296471736ded91254dc81373034fb69f4e8ab8a98c"
	I1020 12:42:49.142277  272557 cri.go:89] found id: "4701f0f003c887f114d5da2a88fc8b6767f57ea38df31b2ec658e6f9e2ca07df"
	I1020 12:42:49.142282  272557 cri.go:89] found id: "7c78acc071dce4799d081c9cd84fb7f3990161652fd814c617b6d088840d020a"
	I1020 12:42:49.142286  272557 cri.go:89] found id: ""
	I1020 12:42:49.142331  272557 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 12:42:49.160092  272557 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:42:49Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:42:49.160157  272557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:42:49.171357  272557 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 12:42:49.171395  272557 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 12:42:49.171449  272557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 12:42:49.182418  272557 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:42:49.183291  272557 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-874012" does not appear in /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:42:49.184058  272557 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-11075/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-874012" cluster setting kubeconfig missing "default-k8s-diff-port-874012" context setting]
	I1020 12:42:49.185309  272557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:49.187667  272557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 12:42:49.198715  272557 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1020 12:42:49.198757  272557 kubeadm.go:601] duration metric: took 27.355505ms to restartPrimaryControlPlane
	I1020 12:42:49.198783  272557 kubeadm.go:402] duration metric: took 94.149268ms to StartCluster
	I1020 12:42:49.198804  272557 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:49.198883  272557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:42:49.200959  272557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:49.201240  272557 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:42:49.201641  272557 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:42:49.201735  272557 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-874012"
	I1020 12:42:49.201757  272557 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-874012"
	W1020 12:42:49.201766  272557 addons.go:247] addon storage-provisioner should already be in state true
	I1020 12:42:49.201763  272557 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-874012"
	I1020 12:42:49.201796  272557 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-874012"
	W1020 12:42:49.201805  272557 addons.go:247] addon dashboard should already be in state true
	I1020 12:42:49.201811  272557 config.go:182] Loaded profile config "default-k8s-diff-port-874012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:49.201826  272557 host.go:66] Checking if "default-k8s-diff-port-874012" exists ...
	I1020 12:42:49.201837  272557 host.go:66] Checking if "default-k8s-diff-port-874012" exists ...
	I1020 12:42:49.201850  272557 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-874012"
	I1020 12:42:49.201864  272557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-874012"
	I1020 12:42:49.202153  272557 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Status}}
	I1020 12:42:49.202345  272557 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Status}}
	I1020 12:42:49.202958  272557 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Status}}
	I1020 12:42:49.205923  272557 out.go:179] * Verifying Kubernetes components...
	I1020 12:42:49.207564  272557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:42:49.243683  272557 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:42:49.246007  272557 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:42:49.246026  272557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:42:49.246083  272557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:42:49.246463  272557 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-874012"
	W1020 12:42:49.246555  272557 addons.go:247] addon default-storageclass should already be in state true
	I1020 12:42:49.246594  272557 host.go:66] Checking if "default-k8s-diff-port-874012" exists ...
	I1020 12:42:49.247202  272557 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Status}}
	I1020 12:42:49.249423  272557 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 12:42:49.251887  272557 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1020 12:42:45.304252  263183 node_ready.go:57] node "embed-certs-907116" has "Ready":"False" status (will retry)
	W1020 12:42:47.803477  263183 node_ready.go:57] node "embed-certs-907116" has "Ready":"False" status (will retry)
	I1020 12:42:46.166606  236655 cri.go:89] found id: ""
	I1020 12:42:46.166629  236655 logs.go:282] 0 containers: []
	W1020 12:42:46.166641  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:42:46.166648  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:42:46.166703  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:42:46.196137  236655 cri.go:89] found id: ""
	I1020 12:42:46.196168  236655 logs.go:282] 0 containers: []
	W1020 12:42:46.196179  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:42:46.196189  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:42:46.196205  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:42:46.212404  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:42:46.212436  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:42:46.275290  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:42:46.275316  236655 logs.go:123] Gathering logs for kube-apiserver [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a] ...
	I1020 12:42:46.275332  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:46.312901  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:42:46.312932  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:46.383798  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:46.383835  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:46.417374  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:42:46.417407  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:42:46.484411  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:42:46.484455  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:42:46.522928  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:46.522959  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:49.116317  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:42:49.116789  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:42:49.116849  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:42:49.116902  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:42:49.151549  236655 cri.go:89] found id: "badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:49.151573  236655 cri.go:89] found id: ""
	I1020 12:42:49.151583  236655 logs.go:282] 1 containers: [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a]
	I1020 12:42:49.151635  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:49.156973  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:42:49.157060  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:42:49.193854  236655 cri.go:89] found id: ""
	I1020 12:42:49.193931  236655 logs.go:282] 0 containers: []
	W1020 12:42:49.193956  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:42:49.193975  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:42:49.194057  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:42:49.238899  236655 cri.go:89] found id: ""
	I1020 12:42:49.238933  236655 logs.go:282] 0 containers: []
	W1020 12:42:49.238946  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:42:49.238956  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:42:49.239036  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:42:49.294099  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:49.294126  236655 cri.go:89] found id: ""
	I1020 12:42:49.294137  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:42:49.294194  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:49.299215  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:42:49.299290  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:42:49.344702  236655 cri.go:89] found id: ""
	I1020 12:42:49.344731  236655 logs.go:282] 0 containers: []
	W1020 12:42:49.344742  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:42:49.344750  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:42:49.344820  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:42:49.380448  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:49.380468  236655 cri.go:89] found id: ""
	I1020 12:42:49.380477  236655 logs.go:282] 1 containers: [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb]
	I1020 12:42:49.380542  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:49.384832  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:42:49.384907  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:42:49.419520  236655 cri.go:89] found id: ""
	I1020 12:42:49.419546  236655 logs.go:282] 0 containers: []
	W1020 12:42:49.419555  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:42:49.419561  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:42:49.419617  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:42:49.461314  236655 cri.go:89] found id: ""
	I1020 12:42:49.461344  236655 logs.go:282] 0 containers: []
	W1020 12:42:49.461355  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:42:49.461366  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:42:49.461402  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:42:49.489058  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:42:49.489099  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:42:49.566196  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:42:49.566220  236655 logs.go:123] Gathering logs for kube-apiserver [badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a] ...
	I1020 12:42:49.566238  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:49.607187  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:42:49.607223  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:49.682268  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:42:49.682305  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:49.714184  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:42:49.714222  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:42:49.784626  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:42:49.784660  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:42:49.821599  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:49.821629  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:49.254445  272557 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 12:42:49.254512  272557 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 12:42:49.254615  272557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:42:49.279884  272557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:42:49.285152  272557 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:42:49.285188  272557 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:42:49.285249  272557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:42:49.286857  272557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:42:49.313287  272557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:42:49.385992  272557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:42:49.403679  272557 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-874012" to be "Ready" ...
	I1020 12:42:49.411592  272557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:42:49.412893  272557 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 12:42:49.412915  272557 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 12:42:49.433270  272557 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 12:42:49.433300  272557 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 12:42:49.437726  272557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:42:49.454381  272557 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 12:42:49.454423  272557 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 12:42:49.488063  272557 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 12:42:49.488088  272557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 12:42:49.506581  272557 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 12:42:49.506604  272557 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 12:42:49.524293  272557 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 12:42:49.524317  272557 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 12:42:49.539246  272557 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 12:42:49.539270  272557 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 12:42:49.555617  272557 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 12:42:49.555696  272557 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 12:42:49.573131  272557 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 12:42:49.573160  272557 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 12:42:49.588212  272557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
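A plausible follow-up once that apply returns, assuming the manifests above target the kubernetes-dashboard namespace as stock minikube dashboard manifests do:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl get pods -n kubernetes-dashboard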
	I1020 12:42:50.975409  272557 node_ready.go:49] node "default-k8s-diff-port-874012" is "Ready"
	I1020 12:42:50.975447  272557 node_ready.go:38] duration metric: took 1.571717595s for node "default-k8s-diff-port-874012" to be "Ready" ...
	I1020 12:42:50.975463  272557 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:42:50.975524  272557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:42:47.460842  275397 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 12:42:47.461068  275397 start.go:159] libmachine.API.Create for "auto-312375" (driver="docker")
	I1020 12:42:47.461101  275397 client.go:168] LocalClient.Create starting
	I1020 12:42:47.461222  275397 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem
	I1020 12:42:47.461264  275397 main.go:141] libmachine: Decoding PEM data...
	I1020 12:42:47.461287  275397 main.go:141] libmachine: Parsing certificate...
	I1020 12:42:47.461382  275397 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem
	I1020 12:42:47.461415  275397 main.go:141] libmachine: Decoding PEM data...
	I1020 12:42:47.461432  275397 main.go:141] libmachine: Parsing certificate...
	I1020 12:42:47.461902  275397 cli_runner.go:164] Run: docker network inspect auto-312375 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:42:47.484674  275397 cli_runner.go:211] docker network inspect auto-312375 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:42:47.484739  275397 network_create.go:284] running [docker network inspect auto-312375] to gather additional debugging logs...
	I1020 12:42:47.484763  275397 cli_runner.go:164] Run: docker network inspect auto-312375
	W1020 12:42:47.504612  275397 cli_runner.go:211] docker network inspect auto-312375 returned with exit code 1
	I1020 12:42:47.504647  275397 network_create.go:287] error running [docker network inspect auto-312375]: docker network inspect auto-312375: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-312375 not found
	I1020 12:42:47.504684  275397 network_create.go:289] output of [docker network inspect auto-312375]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-312375 not found
	
	** /stderr **
	I1020 12:42:47.504829  275397 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:42:47.524579  275397 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
	I1020 12:42:47.525308  275397 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b260bb82684 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:b2:17:6a:60:46} reservation:<nil>}
	I1020 12:42:47.526034  275397 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-db577f5b7d12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:85:a2:b7:03:1d} reservation:<nil>}
	I1020 12:42:47.526592  275397 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4e327fc0cc35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:0a:0b:7b:29:bc} reservation:<nil>}
	I1020 12:42:47.527370  275397 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f14c20}
	I1020 12:42:47.527392  275397 network_create.go:124] attempt to create docker network auto-312375 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1020 12:42:47.527441  275397 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-312375 auto-312375
	I1020 12:42:47.593624  275397 network_create.go:108] docker network auto-312375 192.168.85.0/24 created
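The created network can be verified with docker's Go templating, mirroring the inspect format used elsewhere in this log:

    docker network inspect auto-312375 \
      --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
    # expected: 192.168.85.0/24 gw 192.168.85.1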
	I1020 12:42:47.593655  275397 kic.go:121] calculated static IP "192.168.85.2" for the "auto-312375" container
	I1020 12:42:47.593750  275397 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:42:47.613635  275397 cli_runner.go:164] Run: docker volume create auto-312375 --label name.minikube.sigs.k8s.io=auto-312375 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:42:47.634376  275397 oci.go:103] Successfully created a docker volume auto-312375
	I1020 12:42:47.634498  275397 cli_runner.go:164] Run: docker run --rm --name auto-312375-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-312375 --entrypoint /usr/bin/test -v auto-312375:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 12:42:48.064925  275397 oci.go:107] Successfully prepared a docker volume auto-312375
	I1020 12:42:48.064993  275397 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:42:48.065028  275397 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:42:48.065107  275397 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-312375:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 12:42:52.114296  272557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.702661604s)
	I1020 12:42:52.114340  272557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.67657964s)
	I1020 12:42:53.028886  272557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.440613948s)
	I1020 12:42:53.028899  272557 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.053353671s)
	I1020 12:42:53.028929  272557 api_server.go:72] duration metric: took 3.827657449s to wait for apiserver process to appear ...
	I1020 12:42:53.028937  272557 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:42:53.028957  272557 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1020 12:42:53.031401  272557 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-874012 addons enable metrics-server
	
	I1020 12:42:53.032972  272557 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1020 12:42:49.804396  263183 node_ready.go:57] node "embed-certs-907116" has "Ready":"False" status (will retry)
	I1020 12:42:52.303547  263183 node_ready.go:49] node "embed-certs-907116" is "Ready"
	I1020 12:42:52.303614  263183 node_ready.go:38] duration metric: took 11.003348171s for node "embed-certs-907116" to be "Ready" ...
	I1020 12:42:52.303632  263183 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:42:52.303686  263183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:42:52.318088  263183 api_server.go:72] duration metric: took 11.397232293s to wait for apiserver process to appear ...
	I1020 12:42:52.318115  263183 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:42:52.318131  263183 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 12:42:52.581810  263183 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
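The healthz poll above is plain HTTPS against the API server; the same probe by hand needs -k because the serving cert chains to minikubeCA rather than a system CA:

    curl -sk https://192.168.76.2:8443/healthz; echo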
	I1020 12:42:52.583004  263183 api_server.go:141] control plane version: v1.34.1
	I1020 12:42:52.583038  263183 api_server.go:131] duration metric: took 264.916862ms to wait for apiserver health ...
	I1020 12:42:52.583051  263183 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:42:52.609617  263183 system_pods.go:59] 8 kube-system pods found
	I1020 12:42:52.609658  263183 system_pods.go:61] "coredns-66bc5c9577-vpzk5" [7422dd44-eb83-44f9-8711-41a74794dfed] Pending
	I1020 12:42:52.609667  263183 system_pods.go:61] "etcd-embed-certs-907116" [c9b10ac7-f33b-4904-9bd4-e4b45fdbecc3] Running
	I1020 12:42:52.609672  263183 system_pods.go:61] "kindnet-24g82" [86b2fc3f-2d40-4a2d-9068-75b0a952b958] Running
	I1020 12:42:52.609678  263183 system_pods.go:61] "kube-apiserver-embed-certs-907116" [bf5edeb3-2d81-45bc-87e2-6ab9a0e5640f] Running
	I1020 12:42:52.609683  263183 system_pods.go:61] "kube-controller-manager-embed-certs-907116" [7897ad50-0673-4bd6-9cea-65cb1a82d2c4] Running
	I1020 12:42:52.609688  263183 system_pods.go:61] "kube-proxy-s2xbv" [f01f5d2c-f20c-42ea-a933-b6d15ea40244] Running
	I1020 12:42:52.609696  263183 system_pods.go:61] "kube-scheduler-embed-certs-907116" [f0bde9ca-242e-4259-91fc-73b86f6b9066] Running
	I1020 12:42:52.609700  263183 system_pods.go:61] "storage-provisioner" [83ece3ef-33e2-4353-9230-6bdd8c7320c0] Pending
	I1020 12:42:52.609708  263183 system_pods.go:74] duration metric: took 26.650043ms to wait for pod list to return data ...
	I1020 12:42:52.609719  263183 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:42:52.626545  263183 default_sa.go:45] found service account: "default"
	I1020 12:42:52.626577  263183 default_sa.go:55] duration metric: took 16.85028ms for default service account to be created ...
	I1020 12:42:52.626590  263183 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:42:52.669900  263183 system_pods.go:86] 8 kube-system pods found
	I1020 12:42:52.669935  263183 system_pods.go:89] "coredns-66bc5c9577-vpzk5" [7422dd44-eb83-44f9-8711-41a74794dfed] Pending
	I1020 12:42:52.669943  263183 system_pods.go:89] "etcd-embed-certs-907116" [c9b10ac7-f33b-4904-9bd4-e4b45fdbecc3] Running
	I1020 12:42:52.669949  263183 system_pods.go:89] "kindnet-24g82" [86b2fc3f-2d40-4a2d-9068-75b0a952b958] Running
	I1020 12:42:52.669954  263183 system_pods.go:89] "kube-apiserver-embed-certs-907116" [bf5edeb3-2d81-45bc-87e2-6ab9a0e5640f] Running
	I1020 12:42:52.669959  263183 system_pods.go:89] "kube-controller-manager-embed-certs-907116" [7897ad50-0673-4bd6-9cea-65cb1a82d2c4] Running
	I1020 12:42:52.669963  263183 system_pods.go:89] "kube-proxy-s2xbv" [f01f5d2c-f20c-42ea-a933-b6d15ea40244] Running
	I1020 12:42:52.669968  263183 system_pods.go:89] "kube-scheduler-embed-certs-907116" [f0bde9ca-242e-4259-91fc-73b86f6b9066] Running
	I1020 12:42:52.669980  263183 system_pods.go:89] "storage-provisioner" [83ece3ef-33e2-4353-9230-6bdd8c7320c0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:42:52.670005  263183 retry.go:31] will retry after 212.321752ms: missing components: kube-dns
	I1020 12:42:52.989991  263183 system_pods.go:86] 8 kube-system pods found
	I1020 12:42:52.990031  263183 system_pods.go:89] "coredns-66bc5c9577-vpzk5" [7422dd44-eb83-44f9-8711-41a74794dfed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:42:52.990039  263183 system_pods.go:89] "etcd-embed-certs-907116" [c9b10ac7-f33b-4904-9bd4-e4b45fdbecc3] Running
	I1020 12:42:52.990046  263183 system_pods.go:89] "kindnet-24g82" [86b2fc3f-2d40-4a2d-9068-75b0a952b958] Running
	I1020 12:42:52.990051  263183 system_pods.go:89] "kube-apiserver-embed-certs-907116" [bf5edeb3-2d81-45bc-87e2-6ab9a0e5640f] Running
	I1020 12:42:52.990057  263183 system_pods.go:89] "kube-controller-manager-embed-certs-907116" [7897ad50-0673-4bd6-9cea-65cb1a82d2c4] Running
	I1020 12:42:52.990062  263183 system_pods.go:89] "kube-proxy-s2xbv" [f01f5d2c-f20c-42ea-a933-b6d15ea40244] Running
	I1020 12:42:52.990067  263183 system_pods.go:89] "kube-scheduler-embed-certs-907116" [f0bde9ca-242e-4259-91fc-73b86f6b9066] Running
	I1020 12:42:52.990073  263183 system_pods.go:89] "storage-provisioner" [83ece3ef-33e2-4353-9230-6bdd8c7320c0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:42:52.990100  263183 retry.go:31] will retry after 336.785883ms: missing components: kube-dns
	I1020 12:42:53.333454  263183 system_pods.go:86] 8 kube-system pods found
	I1020 12:42:53.333494  263183 system_pods.go:89] "coredns-66bc5c9577-vpzk5" [7422dd44-eb83-44f9-8711-41a74794dfed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:42:53.333503  263183 system_pods.go:89] "etcd-embed-certs-907116" [c9b10ac7-f33b-4904-9bd4-e4b45fdbecc3] Running
	I1020 12:42:53.333510  263183 system_pods.go:89] "kindnet-24g82" [86b2fc3f-2d40-4a2d-9068-75b0a952b958] Running
	I1020 12:42:53.333516  263183 system_pods.go:89] "kube-apiserver-embed-certs-907116" [bf5edeb3-2d81-45bc-87e2-6ab9a0e5640f] Running
	I1020 12:42:53.333524  263183 system_pods.go:89] "kube-controller-manager-embed-certs-907116" [7897ad50-0673-4bd6-9cea-65cb1a82d2c4] Running
	I1020 12:42:53.333529  263183 system_pods.go:89] "kube-proxy-s2xbv" [f01f5d2c-f20c-42ea-a933-b6d15ea40244] Running
	I1020 12:42:53.333534  263183 system_pods.go:89] "kube-scheduler-embed-certs-907116" [f0bde9ca-242e-4259-91fc-73b86f6b9066] Running
	I1020 12:42:53.333541  263183 system_pods.go:89] "storage-provisioner" [83ece3ef-33e2-4353-9230-6bdd8c7320c0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:42:53.333559  263183 retry.go:31] will retry after 442.773516ms: missing components: kube-dns
	I1020 12:42:53.782537  263183 system_pods.go:86] 8 kube-system pods found
	I1020 12:42:53.782569  263183 system_pods.go:89] "coredns-66bc5c9577-vpzk5" [7422dd44-eb83-44f9-8711-41a74794dfed] Running
	I1020 12:42:53.782578  263183 system_pods.go:89] "etcd-embed-certs-907116" [c9b10ac7-f33b-4904-9bd4-e4b45fdbecc3] Running
	I1020 12:42:53.782583  263183 system_pods.go:89] "kindnet-24g82" [86b2fc3f-2d40-4a2d-9068-75b0a952b958] Running
	I1020 12:42:53.782588  263183 system_pods.go:89] "kube-apiserver-embed-certs-907116" [bf5edeb3-2d81-45bc-87e2-6ab9a0e5640f] Running
	I1020 12:42:53.782596  263183 system_pods.go:89] "kube-controller-manager-embed-certs-907116" [7897ad50-0673-4bd6-9cea-65cb1a82d2c4] Running
	I1020 12:42:53.782601  263183 system_pods.go:89] "kube-proxy-s2xbv" [f01f5d2c-f20c-42ea-a933-b6d15ea40244] Running
	I1020 12:42:53.782606  263183 system_pods.go:89] "kube-scheduler-embed-certs-907116" [f0bde9ca-242e-4259-91fc-73b86f6b9066] Running
	I1020 12:42:53.782611  263183 system_pods.go:89] "storage-provisioner" [83ece3ef-33e2-4353-9230-6bdd8c7320c0] Running
	I1020 12:42:53.782621  263183 system_pods.go:126] duration metric: took 1.1560241s to wait for k8s-apps to be running ...
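The retry.go lines above follow a simple poll-with-growing-backoff pattern: list the kube-system pods, report which required components are still missing, and sleep a little longer each round. A minimal self-contained sketch of that loop (missingComponents is a hypothetical stand-in for the real system_pods query):

	// Sketch only; not minikube's actual code.
	package main

	import (
		"fmt"
		"time"
	)

	// missingComponents stands in for the real kube-system pod check; here it
	// reports kube-dns as missing for the first few polls, then succeeds.
	func missingComponents(attempt int) []string {
		if attempt < 3 {
			return []string{"kube-dns"}
		}
		return nil
	}

	func main() {
		backoff := 200 * time.Millisecond
		for attempt := 0; ; attempt++ {
			missing := missingComponents(attempt)
			if len(missing) == 0 {
				fmt.Println("all k8s-apps running")
				return
			}
			fmt.Printf("will retry after %v: missing components: %v\n", backoff, missing)
			time.Sleep(backoff)
			backoff += backoff / 2 // grow the delay, as the 212ms -> 336ms -> 442ms retries above do
		}
	}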
	I1020 12:42:53.782631  263183 system_svc.go:44] waiting for kubelet service to be running ...
	I1020 12:42:53.782679  263183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:42:53.798875  263183 system_svc.go:56] duration metric: took 16.235571ms WaitForService to wait for kubelet
	I1020 12:42:53.798944  263183 kubeadm.go:586] duration metric: took 12.878091031s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:42:53.798981  263183 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:42:53.802212  263183 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:42:53.802239  263183 node_conditions.go:123] node cpu capacity is 8
	I1020 12:42:53.802256  263183 node_conditions.go:105] duration metric: took 3.266101ms to run NodePressure ...
	I1020 12:42:53.802271  263183 start.go:241] waiting for startup goroutines ...
	I1020 12:42:53.802281  263183 start.go:246] waiting for cluster config update ...
	I1020 12:42:53.802314  263183 start.go:255] writing updated cluster config ...
	I1020 12:42:53.802614  263183 ssh_runner.go:195] Run: rm -f paused
	I1020 12:42:53.808960  263183 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:42:53.813705  263183 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vpzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:42:53.818871  263183 pod_ready.go:94] pod "coredns-66bc5c9577-vpzk5" is "Ready"
	I1020 12:42:53.818897  263183 pod_ready.go:86] duration metric: took 5.164935ms for pod "coredns-66bc5c9577-vpzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:42:53.820917  263183 pod_ready.go:83] waiting for pod "etcd-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:42:53.825302  263183 pod_ready.go:94] pod "etcd-embed-certs-907116" is "Ready"
	I1020 12:42:53.825327  263183 pod_ready.go:86] duration metric: took 4.390196ms for pod "etcd-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:42:53.827517  263183 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:42:53.831895  263183 pod_ready.go:94] pod "kube-apiserver-embed-certs-907116" is "Ready"
	I1020 12:42:53.831917  263183 pod_ready.go:86] duration metric: took 4.380316ms for pod "kube-apiserver-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:42:53.833955  263183 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:42:54.213972  263183 pod_ready.go:94] pod "kube-controller-manager-embed-certs-907116" is "Ready"
	I1020 12:42:54.214001  263183 pod_ready.go:86] duration metric: took 380.026665ms for pod "kube-controller-manager-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:42:54.414167  263183 pod_ready.go:83] waiting for pod "kube-proxy-s2xbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:42:54.814034  263183 pod_ready.go:94] pod "kube-proxy-s2xbv" is "Ready"
	I1020 12:42:54.814079  263183 pod_ready.go:86] duration metric: took 399.886107ms for pod "kube-proxy-s2xbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:42:55.013866  263183 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:42:55.413311  263183 pod_ready.go:94] pod "kube-scheduler-embed-certs-907116" is "Ready"
	I1020 12:42:55.413337  263183 pod_ready.go:86] duration metric: took 399.437253ms for pod "kube-scheduler-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:42:55.413363  263183 pod_ready.go:40] duration metric: took 1.604360016s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:42:55.465756  263183 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:42:55.467998  263183 out.go:179] * Done! kubectl is now configured to use "embed-certs-907116" cluster and "default" namespace by default
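The per-pod "Ready" gates in this run mirror what one could check by hand with kubectl; an illustrative equivalent for the first pod above:

	kubectl -n kube-system wait --for=condition=Ready pod/coredns-66bc5c9577-vpzk5 --timeout=4m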
	I1020 12:42:52.470955  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:42:53.034065  272557 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:42:53.034093  272557 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:42:53.034488  272557 addons.go:514] duration metric: took 3.832845601s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1020 12:42:53.529931  272557 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1020 12:42:53.536224  272557 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1020 12:42:53.537160  272557 api_server.go:141] control plane version: v1.34.1
	I1020 12:42:53.537183  272557 api_server.go:131] duration metric: took 508.239226ms to wait for apiserver health ...
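Both clusters gate on /healthz the same way: poll until the apiserver returns 200, treating the transient 500 (here caused by the rbac/bootstrap-roles post-start hook) as retryable. A stripped-down sketch of that loop against the endpoint from this run, not minikube's actual code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver cert is signed by minikube's own CA; this sketch skips
		// verification, where real code would load the cluster CA instead.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		for { // the real wait is bounded by a deadline; elided here
			resp, err := client.Get("https://192.168.103.2:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
				// A 500 listing "[-]poststarthook/rbac/bootstrap-roles failed"
				// clears once bootstrap roles are created; keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
	}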
	I1020 12:42:53.537191  272557 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:42:53.540929  272557 system_pods.go:59] 8 kube-system pods found
	I1020 12:42:53.540986  272557 system_pods.go:61] "coredns-66bc5c9577-vd5sd" [72e24caa-a3c3-45b6-bcf6-42b600c08fce] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:42:53.540999  272557 system_pods.go:61] "etcd-default-k8s-diff-port-874012" [abddfbb2-07a1-4f97-8c67-98dd0e0b67b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:42:53.541014  272557 system_pods.go:61] "kindnet-jrv62" [0e844105-d285-40a8-8cf7-30221c1e2034] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1020 12:42:53.541022  272557 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-874012" [54f536d7-7e24-4add-87c0-2710fd650613] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:42:53.541099  272557 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-874012" [907b4deb-59a9-488b-86e0-e268e8f0e623] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:42:53.541109  272557 system_pods.go:61] "kube-proxy-bbw6k" [5fc9fff8-30ab-4d81-868c-9d06b36040de] Running
	I1020 12:42:53.541118  272557 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-874012" [d58a24ee-f2ff-4414-9cad-bb938092650f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:42:53.541123  272557 system_pods.go:61] "storage-provisioner" [c07250e9-4c89-414f-94b6-af63b9e5d71d] Running
	I1020 12:42:53.541140  272557 system_pods.go:74] duration metric: took 3.934017ms to wait for pod list to return data ...
	I1020 12:42:53.541149  272557 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:42:53.543996  272557 default_sa.go:45] found service account: "default"
	I1020 12:42:53.544015  272557 default_sa.go:55] duration metric: took 2.859276ms for default service account to be created ...
	I1020 12:42:53.544026  272557 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:42:53.546858  272557 system_pods.go:86] 8 kube-system pods found
	I1020 12:42:53.546889  272557 system_pods.go:89] "coredns-66bc5c9577-vd5sd" [72e24caa-a3c3-45b6-bcf6-42b600c08fce] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:42:53.546902  272557 system_pods.go:89] "etcd-default-k8s-diff-port-874012" [abddfbb2-07a1-4f97-8c67-98dd0e0b67b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:42:53.546912  272557 system_pods.go:89] "kindnet-jrv62" [0e844105-d285-40a8-8cf7-30221c1e2034] Running
	I1020 12:42:53.546930  272557 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-874012" [54f536d7-7e24-4add-87c0-2710fd650613] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:42:53.546942  272557 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-874012" [907b4deb-59a9-488b-86e0-e268e8f0e623] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:42:53.546951  272557 system_pods.go:89] "kube-proxy-bbw6k" [5fc9fff8-30ab-4d81-868c-9d06b36040de] Running
	I1020 12:42:53.546959  272557 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-874012" [d58a24ee-f2ff-4414-9cad-bb938092650f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:42:53.546967  272557 system_pods.go:89] "storage-provisioner" [c07250e9-4c89-414f-94b6-af63b9e5d71d] Running
	I1020 12:42:53.546977  272557 system_pods.go:126] duration metric: took 2.94386ms to wait for k8s-apps to be running ...
	I1020 12:42:53.546990  272557 system_svc.go:44] waiting for kubelet service to be running ...
	I1020 12:42:53.547038  272557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:42:53.562928  272557 system_svc.go:56] duration metric: took 15.931335ms WaitForService to wait for kubelet
	I1020 12:42:53.562958  272557 kubeadm.go:586] duration metric: took 4.361684609s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:42:53.562979  272557 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:42:53.566168  272557 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:42:53.566197  272557 node_conditions.go:123] node cpu capacity is 8
	I1020 12:42:53.566212  272557 node_conditions.go:105] duration metric: took 3.227205ms to run NodePressure ...
	I1020 12:42:53.566227  272557 start.go:241] waiting for startup goroutines ...
	I1020 12:42:53.566242  272557 start.go:246] waiting for cluster config update ...
	I1020 12:42:53.566259  272557 start.go:255] writing updated cluster config ...
	I1020 12:42:53.566568  272557 ssh_runner.go:195] Run: rm -f paused
	I1020 12:42:53.570867  272557 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:42:53.575247  272557 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vd5sd" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 12:42:55.581165  272557 pod_ready.go:104] pod "coredns-66bc5c9577-vd5sd" is not "Ready", error: <nil>
	I1020 12:42:53.022455  275397 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-312375:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.957290435s)
	I1020 12:42:53.022494  275397 kic.go:203] duration metric: took 4.957462842s to extract preloaded images to volume ...
	W1020 12:42:53.022595  275397 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 12:42:53.022650  275397 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 12:42:53.022697  275397 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:42:53.098802  275397 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-312375 --name auto-312375 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-312375 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-312375 --network auto-312375 --ip 192.168.85.2 --volume auto-312375:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 12:42:53.451062  275397 cli_runner.go:164] Run: docker container inspect auto-312375 --format={{.State.Running}}
	I1020 12:42:53.469742  275397 cli_runner.go:164] Run: docker container inspect auto-312375 --format={{.State.Status}}
	I1020 12:42:53.489339  275397 cli_runner.go:164] Run: docker exec auto-312375 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:42:53.542942  275397 oci.go:144] the created container "auto-312375" has a running status.
	I1020 12:42:53.542977  275397 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/auto-312375/id_rsa...
	I1020 12:42:53.795766  275397 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/auto-312375/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:42:53.830719  275397 cli_runner.go:164] Run: docker container inspect auto-312375 --format={{.State.Status}}
	I1020 12:42:53.850366  275397 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:42:53.850391  275397 kic_runner.go:114] Args: [docker exec --privileged auto-312375 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:42:53.899964  275397 cli_runner.go:164] Run: docker container inspect auto-312375 --format={{.State.Status}}
	I1020 12:42:53.918733  275397 machine.go:93] provisionDockerMachine start ...
	I1020 12:42:53.918838  275397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-312375
	I1020 12:42:53.940832  275397 main.go:141] libmachine: Using SSH client type: native
	I1020 12:42:53.941085  275397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1020 12:42:53.941105  275397 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:42:54.091624  275397 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-312375
	
	I1020 12:42:54.091653  275397 ubuntu.go:182] provisioning hostname "auto-312375"
	I1020 12:42:54.091712  275397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-312375
	I1020 12:42:54.112473  275397 main.go:141] libmachine: Using SSH client type: native
	I1020 12:42:54.112755  275397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1020 12:42:54.112787  275397 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-312375 && echo "auto-312375" | sudo tee /etc/hostname
	I1020 12:42:54.268130  275397 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-312375
	
	I1020 12:42:54.268235  275397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-312375
	I1020 12:42:54.287581  275397 main.go:141] libmachine: Using SSH client type: native
	I1020 12:42:54.287844  275397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1020 12:42:54.287864  275397 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-312375' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-312375/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-312375' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:42:54.432763  275397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:42:54.432802  275397 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:42:54.432840  275397 ubuntu.go:190] setting up certificates
	I1020 12:42:54.432854  275397 provision.go:84] configureAuth start
	I1020 12:42:54.432917  275397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-312375
	I1020 12:42:54.451337  275397 provision.go:143] copyHostCerts
	I1020 12:42:54.451402  275397 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:42:54.451411  275397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:42:54.451484  275397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:42:54.451617  275397 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:42:54.451635  275397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:42:54.451668  275397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:42:54.451803  275397 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:42:54.451820  275397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:42:54.451864  275397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:42:54.451957  275397 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.auto-312375 san=[127.0.0.1 192.168.85.2 auto-312375 localhost minikube]
	I1020 12:42:54.627097  275397 provision.go:177] copyRemoteCerts
	I1020 12:42:54.627156  275397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:42:54.627188  275397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-312375
	I1020 12:42:54.646302  275397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/auto-312375/id_rsa Username:docker}
	I1020 12:42:54.748658  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:42:54.768921  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1020 12:42:54.787754  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 12:42:54.807438  275397 provision.go:87] duration metric: took 374.571659ms to configureAuth
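The server cert that configureAuth produces is an ordinary x509 certificate whose SANs cover every name and address the machine answers to (the san=[...] list above). A hedged stdlib sketch of the same idea, self-signed here for brevity where minikube signs with its ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048) // errors elided for brevity
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.auto-312375"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the log line above.
			DNSNames:    []string{"auto-312375", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}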
	I1020 12:42:54.807465  275397 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:42:54.807635  275397 config.go:182] Loaded profile config "auto-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:42:54.807746  275397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-312375
	I1020 12:42:54.826508  275397 main.go:141] libmachine: Using SSH client type: native
	I1020 12:42:54.826767  275397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1020 12:42:54.826827  275397 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:42:55.084383  275397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:42:55.084410  275397 machine.go:96] duration metric: took 1.165655608s to provisionDockerMachine
	I1020 12:42:55.084422  275397 client.go:171] duration metric: took 7.623311506s to LocalClient.Create
	I1020 12:42:55.084445  275397 start.go:167] duration metric: took 7.623376444s to libmachine.API.Create "auto-312375"
	I1020 12:42:55.084454  275397 start.go:293] postStartSetup for "auto-312375" (driver="docker")
	I1020 12:42:55.084464  275397 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:42:55.084527  275397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:42:55.084571  275397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-312375
	I1020 12:42:55.103615  275397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/auto-312375/id_rsa Username:docker}
	I1020 12:42:55.212998  275397 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:42:55.216892  275397 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:42:55.216931  275397 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:42:55.216942  275397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:42:55.216999  275397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:42:55.217111  275397 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:42:55.217244  275397 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:42:55.225869  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:42:55.246870  275397 start.go:296] duration metric: took 162.400168ms for postStartSetup
	I1020 12:42:55.247260  275397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-312375
	I1020 12:42:55.273996  275397 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/config.json ...
	I1020 12:42:55.274369  275397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:42:55.274419  275397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-312375
	I1020 12:42:55.298917  275397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/auto-312375/id_rsa Username:docker}
	I1020 12:42:55.398258  275397 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:42:55.403226  275397 start.go:128] duration metric: took 7.944624152s to createHost
	I1020 12:42:55.403251  275397 start.go:83] releasing machines lock for "auto-312375", held for 7.944788002s
	I1020 12:42:55.403328  275397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-312375
	I1020 12:42:55.423543  275397 ssh_runner.go:195] Run: cat /version.json
	I1020 12:42:55.423586  275397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-312375
	I1020 12:42:55.423600  275397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:42:55.423706  275397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-312375
	I1020 12:42:55.444490  275397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/auto-312375/id_rsa Username:docker}
	I1020 12:42:55.444743  275397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/auto-312375/id_rsa Username:docker}
	I1020 12:42:55.607816  275397 ssh_runner.go:195] Run: systemctl --version
	I1020 12:42:55.615254  275397 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:42:55.657553  275397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:42:55.662692  275397 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:42:55.662789  275397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:42:55.691215  275397 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1020 12:42:55.691238  275397 start.go:495] detecting cgroup driver to use...
	I1020 12:42:55.691274  275397 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:42:55.691317  275397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:42:55.709737  275397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:42:55.722966  275397 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:42:55.723034  275397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:42:55.740530  275397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:42:55.760064  275397 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:42:55.852697  275397 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:42:55.951113  275397 docker.go:234] disabling docker service ...
	I1020 12:42:55.951182  275397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:42:55.978064  275397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:42:55.998933  275397 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:42:56.089581  275397 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:42:56.176000  275397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:42:56.188755  275397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:42:56.203208  275397 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:42:56.203257  275397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:56.214480  275397 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:42:56.214550  275397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:56.224917  275397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:56.234207  275397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:56.243716  275397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:42:56.252184  275397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:56.260985  275397 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:56.275160  275397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:42:56.284301  275397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:42:56.292893  275397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:42:56.303342  275397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:42:56.390489  275397 ssh_runner.go:195] Run: sudo systemctl restart crio
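Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines before the restart (an illustrative excerpt; section layout assumed, and the real file carries more settings):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]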
	I1020 12:42:56.513213  275397 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:42:56.513314  275397 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:42:56.518732  275397 start.go:563] Will wait 60s for crictl version
	I1020 12:42:56.518854  275397 ssh_runner.go:195] Run: which crictl
	I1020 12:42:56.523729  275397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:42:56.557681  275397 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:42:56.557767  275397 ssh_runner.go:195] Run: crio --version
	I1020 12:42:56.597270  275397 ssh_runner.go:195] Run: crio --version
	I1020 12:42:56.643519  275397 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:42:56.645613  275397 cli_runner.go:164] Run: docker network inspect auto-312375 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:42:56.671500  275397 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 12:42:56.677331  275397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:42:56.692422  275397 kubeadm.go:883] updating cluster {Name:auto-312375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-312375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:42:56.692559  275397 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:42:56.692614  275397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:42:56.739663  275397 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:42:56.739692  275397 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:42:56.739749  275397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:42:56.777731  275397 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:42:56.777756  275397 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:42:56.777765  275397 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 12:42:56.777893  275397 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-312375 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-312375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
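One detail worth noting in the kubelet unit above: the bare ExecStart= line first clears the command inherited from the base kubelet.service before the next line sets the real one, the standard systemd drop-in idiom for replacing (rather than appending to) ExecStart. The drop-in lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf per the 361-byte scp below; the merged result can be inspected with:

	systemctl cat kubelet   # base kubelet.service plus the 10-kubeadm.conf drop-in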
	I1020 12:42:56.777979  275397 ssh_runner.go:195] Run: crio config
	I1020 12:42:56.849738  275397 cni.go:84] Creating CNI manager for ""
	I1020 12:42:56.849797  275397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:42:56.849825  275397 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:42:56.849866  275397 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-312375 NodeName:auto-312375 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:42:56.850053  275397 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-312375"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 12:42:56.850132  275397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:42:56.861360  275397 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:42:56.861432  275397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:42:56.872988  275397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1020 12:42:56.890950  275397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:42:56.914062  275397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
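The 2,207-byte file just staged is the four-document kubeadm config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). On a fresh cluster it is consumed roughly along these lines (the actual invocation also passes --ignore-preflight-errors flags):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml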
	I1020 12:42:56.932744  275397 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:42:56.938194  275397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:42:56.952373  275397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:42:57.069648  275397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:42:57.093272  275397 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375 for IP: 192.168.85.2
	I1020 12:42:57.093303  275397 certs.go:195] generating shared ca certs ...
	I1020 12:42:57.093329  275397 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:57.093483  275397 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:42:57.093557  275397 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:42:57.093568  275397 certs.go:257] generating profile certs ...
	I1020 12:42:57.093637  275397 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/client.key
	I1020 12:42:57.093658  275397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/client.crt with IP's: []
	I1020 12:42:58.130539  275397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/client.crt ...
	I1020 12:42:58.130576  275397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/client.crt: {Name:mkeba353a6874330e7b91450b6f34b7c17ea7ad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:58.130753  275397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/client.key ...
	I1020 12:42:58.130819  275397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/client.key: {Name:mk3b961b10e76efe215e86a96d99c84b0c790d9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:58.130960  275397 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/apiserver.key.b95e161e
	I1020 12:42:58.130980  275397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/apiserver.crt.b95e161e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1020 12:42:58.337930  275397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/apiserver.crt.b95e161e ...
	I1020 12:42:58.337956  275397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/apiserver.crt.b95e161e: {Name:mk25610fd07a8d06f7be0dbfe176040a10df6c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:58.338173  275397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/apiserver.key.b95e161e ...
	I1020 12:42:58.338187  275397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/apiserver.key.b95e161e: {Name:mkf016c0cf7d68f59c3dc3160520219c70ef9157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:58.338266  275397 certs.go:382] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/apiserver.crt.b95e161e -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/apiserver.crt
	I1020 12:42:58.338337  275397 certs.go:386] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/apiserver.key.b95e161e -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/apiserver.key
	I1020 12:42:58.338394  275397 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/proxy-client.key
	I1020 12:42:58.338407  275397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/proxy-client.crt with IP's: []
	I1020 12:42:58.467779  275397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/proxy-client.crt ...
	I1020 12:42:58.467819  275397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/proxy-client.crt: {Name:mk2358f3c186d3899718ec7192c457523bedb2ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:58.467973  275397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/proxy-client.key ...
	I1020 12:42:58.467983  275397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/proxy-client.key: {Name:mkcb6ad4926879554ea1d3777ad56b82604f8097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:42:58.468153  275397 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:42:58.468185  275397 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:42:58.468195  275397 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:42:58.468217  275397 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:42:58.468240  275397 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:42:58.468260  275397 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:42:58.468299  275397 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:42:58.468899  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:42:58.489465  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:42:58.510203  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:42:58.529391  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:42:58.548757  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1020 12:42:58.569922  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1020 12:42:58.590823  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:42:58.612247  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/auto-312375/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:42:58.632956  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:42:58.653891  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:42:58.671631  275397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:42:58.689538  275397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
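If a start fails past this point, the certificates just copied can be sanity-checked on the node (e.g. via minikube ssh); a minimal sketch, assuming the default cert layout shown in the scp lines above:

	# confirm the apiserver cert carries the expected SANs (cluster name, node IP)
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'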
	I1020 12:42:58.702863  275397 ssh_runner.go:195] Run: openssl version
	I1020 12:42:58.709573  275397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:42:58.718967  275397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:42:58.723222  275397 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:42:58.723281  275397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:42:58.757966  275397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:42:58.768020  275397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:42:58.776919  275397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:42:58.781405  275397 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:42:58.781471  275397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:42:58.820513  275397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:42:58.834238  275397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:42:58.845420  275397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:42:58.850482  275397 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:42:58.850549  275397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:42:58.903037  275397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
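The three test/link/hash sequences above follow OpenSSL's standard CA-lookup convention: `openssl x509 -hash` prints the subject-name hash, and verification expects a `<hash>.0` symlink in /etc/ssl/certs (b5213941.0 above is minikubeCA's hash). A minimal sketch of one iteration:

	# print the subject-name hash OpenSSL uses to locate a CA at verify time
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expose the CA under <hash>.0 so chain building can find it
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"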
	I1020 12:42:58.914975  275397 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:42:58.919512  275397 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 12:42:58.919572  275397 kubeadm.go:400] StartCluster: {Name:auto-312375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-312375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:42:58.919659  275397 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:42:58.919713  275397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:42:58.954231  275397 cri.go:89] found id: ""
	I1020 12:42:58.954302  275397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:42:58.964698  275397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:42:58.975120  275397 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 12:42:58.975200  275397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:42:58.985297  275397 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 12:42:58.985318  275397 kubeadm.go:157] found existing configuration files:
	
	I1020 12:42:58.985374  275397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 12:42:58.995301  275397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 12:42:58.995371  275397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 12:42:59.004862  275397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 12:42:59.015527  275397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 12:42:59.015612  275397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:42:59.026633  275397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 12:42:59.036751  275397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 12:42:59.036838  275397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:42:59.046739  275397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 12:42:59.056862  275397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 12:42:59.056921  275397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 12:42:59.066409  275397 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 12:42:59.118271  275397 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 12:42:59.118373  275397 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 12:42:59.145563  275397 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 12:42:59.145666  275397 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1020 12:42:59.145722  275397 kubeadm.go:318] OS: Linux
	I1020 12:42:59.145815  275397 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 12:42:59.145885  275397 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 12:42:59.145989  275397 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 12:42:59.146072  275397 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 12:42:59.146138  275397 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 12:42:59.146200  275397 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 12:42:59.146267  275397 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 12:42:59.146325  275397 kubeadm.go:318] CGROUPS_IO: enabled
	I1020 12:42:59.228219  275397 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 12:42:59.228356  275397 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 12:42:59.228456  275397 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 12:42:59.238010  275397 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 12:42:57.471856  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1020 12:42:57.471939  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:42:57.472004  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:42:57.507911  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:42:57.507935  236655 cri.go:89] found id: "badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a"
	I1020 12:42:57.507942  236655 cri.go:89] found id: ""
	I1020 12:42:57.507951  236655 logs.go:282] 2 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25 badda87b2916e7473a6dbc46c1d3d2c76bd7ee60de1a38a146c19998f94cd72a]
	I1020 12:42:57.508010  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:57.513095  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:57.517931  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:42:57.518003  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:42:57.550170  236655 cri.go:89] found id: ""
	I1020 12:42:57.550203  236655 logs.go:282] 0 containers: []
	W1020 12:42:57.550214  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:42:57.550222  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:42:57.550284  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:42:57.583238  236655 cri.go:89] found id: ""
	I1020 12:42:57.583258  236655 logs.go:282] 0 containers: []
	W1020 12:42:57.583268  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:42:57.583276  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:42:57.583343  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:42:57.613543  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:57.613567  236655 cri.go:89] found id: ""
	I1020 12:42:57.613577  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:42:57.613627  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:57.618116  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:42:57.618186  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:42:57.649361  236655 cri.go:89] found id: ""
	I1020 12:42:57.649388  236655 logs.go:282] 0 containers: []
	W1020 12:42:57.649398  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:42:57.649405  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:42:57.649464  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:42:57.680604  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:42:57.680628  236655 cri.go:89] found id: ""
	I1020 12:42:57.680637  236655 logs.go:282] 1 containers: [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb]
	I1020 12:42:57.680699  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:42:57.685955  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:42:57.686022  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:42:57.719599  236655 cri.go:89] found id: ""
	I1020 12:42:57.719627  236655 logs.go:282] 0 containers: []
	W1020 12:42:57.719638  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:42:57.719646  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:42:57.719706  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:42:57.754132  236655 cri.go:89] found id: ""
	I1020 12:42:57.754159  236655 logs.go:282] 0 containers: []
	W1020 12:42:57.754170  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:42:57.754186  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:42:57.754198  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:42:57.794440  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:42:57.794476  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:42:57.834841  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:42:57.834878  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:42:57.914953  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:42:57.914996  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:42:58.044128  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:42:58.044160  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:42:58.063938  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:42:58.063972  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:42:57.581734  272557 pod_ready.go:104] pod "coredns-66bc5c9577-vd5sd" is not "Ready", error: <nil>
	W1020 12:42:59.583166  272557 pod_ready.go:104] pod "coredns-66bc5c9577-vd5sd" is not "Ready", error: <nil>
	I1020 12:42:59.240205  275397 out.go:252]   - Generating certificates and keys ...
	I1020 12:42:59.240317  275397 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 12:42:59.240431  275397 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 12:42:59.664671  275397 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 12:42:59.962000  275397 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 12:43:00.072528  275397 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 12:43:00.413400  275397 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 12:43:00.583068  275397 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 12:43:00.583244  275397 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-312375 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1020 12:43:01.135571  275397 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 12:43:01.135802  275397 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-312375 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1020 12:43:01.442798  275397 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 12:43:01.700281  275397 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 12:43:02.102270  275397 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 12:43:02.102380  275397 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 12:43:02.233941  275397 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 12:43:02.515579  275397 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 12:43:02.726400  275397 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 12:43:03.245693  275397 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 12:43:03.660865  275397 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 12:43:03.661320  275397 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 12:43:03.665289  275397 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Oct 20 12:42:53 embed-certs-907116 crio[779]: time="2025-10-20T12:42:53.304355559Z" level=info msg="Starting container: add4c22066e620b0220240b5a60089113c1bc3ea613699afc18e54b835e920b5" id=72125160-9627-46a5-b43f-09388c6be8cf name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:42:53 embed-certs-907116 crio[779]: time="2025-10-20T12:42:53.309837019Z" level=info msg="Started container" PID=1841 containerID=add4c22066e620b0220240b5a60089113c1bc3ea613699afc18e54b835e920b5 description=kube-system/coredns-66bc5c9577-vpzk5/coredns id=72125160-9627-46a5-b43f-09388c6be8cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=03cc25bdc1d704f7f7758ace64376d7e90253418756db4a9a6017253aee2c0fb
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.941963128Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ffe5a5e2-4d0e-437d-b15f-c5bb49a91d6d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.942052751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.946724907Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:90fb08f647c616f4ab8bf4c63aac5f6e40805ec5a9e00a5a90063a3c26249b38 UID:b456bfa2-8544-4ae8-928b-cf120271b15c NetNS:/var/run/netns/53d85095-edb0-4439-a53b-3a07219bf86a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128868}] Aliases:map[]}"
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.946765133Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.957969654Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:90fb08f647c616f4ab8bf4c63aac5f6e40805ec5a9e00a5a90063a3c26249b38 UID:b456bfa2-8544-4ae8-928b-cf120271b15c NetNS:/var/run/netns/53d85095-edb0-4439-a53b-3a07219bf86a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128868}] Aliases:map[]}"
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.958144317Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.959044153Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.960322469Z" level=info msg="Ran pod sandbox 90fb08f647c616f4ab8bf4c63aac5f6e40805ec5a9e00a5a90063a3c26249b38 with infra container: default/busybox/POD" id=ffe5a5e2-4d0e-437d-b15f-c5bb49a91d6d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.961679888Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=92057aca-c2a4-4845-a1e7-872c1ff06313 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.961868455Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=92057aca-c2a4-4845-a1e7-872c1ff06313 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.961909721Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=92057aca-c2a4-4845-a1e7-872c1ff06313 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.963219816Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=95ea613c-f804-4529-b064-a9f835a9791e name=/runtime.v1.ImageService/PullImage
	Oct 20 12:42:55 embed-certs-907116 crio[779]: time="2025-10-20T12:42:55.966179681Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 20 12:42:57 embed-certs-907116 crio[779]: time="2025-10-20T12:42:57.364648082Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=95ea613c-f804-4529-b064-a9f835a9791e name=/runtime.v1.ImageService/PullImage
	Oct 20 12:42:57 embed-certs-907116 crio[779]: time="2025-10-20T12:42:57.365552059Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=23e9a0b6-d712-4492-9a07-90e1744938d1 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:57 embed-certs-907116 crio[779]: time="2025-10-20T12:42:57.367801827Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4bcd1fa6-00a5-4576-be05-39862369e28b name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:42:57 embed-certs-907116 crio[779]: time="2025-10-20T12:42:57.373785496Z" level=info msg="Creating container: default/busybox/busybox" id=158b2f62-70d6-4fc4-a5a7-ce0f0955550a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:57 embed-certs-907116 crio[779]: time="2025-10-20T12:42:57.373913228Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:57 embed-certs-907116 crio[779]: time="2025-10-20T12:42:57.378736355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:57 embed-certs-907116 crio[779]: time="2025-10-20T12:42:57.379405283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:42:57 embed-certs-907116 crio[779]: time="2025-10-20T12:42:57.418643368Z" level=info msg="Created container ee3ee22bd5bf514a952b5d8e6116f1db50a31d1e8ad84252220fb544391f1151: default/busybox/busybox" id=158b2f62-70d6-4fc4-a5a7-ce0f0955550a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:42:57 embed-certs-907116 crio[779]: time="2025-10-20T12:42:57.419411434Z" level=info msg="Starting container: ee3ee22bd5bf514a952b5d8e6116f1db50a31d1e8ad84252220fb544391f1151" id=5d82d120-eadc-4c77-b52d-9ba947083dc7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:42:57 embed-certs-907116 crio[779]: time="2025-10-20T12:42:57.42183311Z" level=info msg="Started container" PID=1902 containerID=ee3ee22bd5bf514a952b5d8e6116f1db50a31d1e8ad84252220fb544391f1151 description=default/busybox/busybox id=5d82d120-eadc-4c77-b52d-9ba947083dc7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90fb08f647c616f4ab8bf4c63aac5f6e40805ec5a9e00a5a90063a3c26249b38
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ee3ee22bd5bf5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   90fb08f647c61       busybox                                      default
	add4c22066e62       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   03cc25bdc1d70       coredns-66bc5c9577-vpzk5                     kube-system
	775623f566156       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   d10804652b534       storage-provisioner                          kube-system
	8e0e5558f1f95       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   32d4747332cc9       kube-proxy-s2xbv                             kube-system
	0bf4e8f58cbe5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   d66f777d65099       kindnet-24g82                                kube-system
	ef2aa898a829d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   726f4b0ed240c       kube-apiserver-embed-certs-907116            kube-system
	bce6c48dee44e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   dbc329d2b3e8f       etcd-embed-certs-907116                      kube-system
	368130d837377       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   c34cefc6ff77b       kube-controller-manager-embed-certs-907116   kube-system
	7ce06f7ea8c04       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   47bcffecee498       kube-scheduler-embed-certs-907116            kube-system
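The table above is the default `crictl ps` view; to reproduce it on the node, something like:

	# all containers, including exited ones, with pod and namespace columns
	sudo crictl ps -a
	# filter to a single workload by container-name substring
	sudo crictl ps -a --name coredns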
	
	
	==> coredns [add4c22066e620b0220240b5a60089113c1bc3ea613699afc18e54b835e920b5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44568 - 18900 "HINFO IN 5856585213490752006.2290276040981867635. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021706209s
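The HINFO query above is CoreDNS's loop-detection self-probe; NXDOMAIN there is expected. A quick in-cluster resolution check, assuming the busybox pod created earlier is still running:

	kubectl exec busybox -- nslookup kubernetes.default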
	
	
	==> describe nodes <==
	Name:               embed-certs-907116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-907116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=embed-certs-907116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_42_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:42:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-907116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:43:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:42:56 +0000   Mon, 20 Oct 2025 12:42:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:42:56 +0000   Mon, 20 Oct 2025 12:42:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:42:56 +0000   Mon, 20 Oct 2025 12:42:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:42:56 +0000   Mon, 20 Oct 2025 12:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-907116
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                6a5dfc3b-6ef1-4198-ad94-963e2bd73b87
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-vpzk5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-907116                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-24g82                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-907116             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-907116    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-s2xbv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-907116             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node embed-certs-907116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node embed-certs-907116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node embed-certs-907116 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node embed-certs-907116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node embed-certs-907116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node embed-certs-907116 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node embed-certs-907116 event: Registered Node embed-certs-907116 in Controller
	  Normal  NodeReady                14s                kubelet          Node embed-certs-907116 status is now: NodeReady
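The NodeReady flip at 12:42:52 can also be checked without the full describe output; a small kubectl sketch (node name taken from above):

	# prints "True" once kubelet reports Ready
	kubectl get node embed-certs-907116 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'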
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
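The repeated martian-source lines are kernel diagnostics triggered by log_martians, not failures; to inspect or quiet them on the node (standard sysctl keys; interface name from the log):

	# 1 means martian packets are logged
	sysctl net.ipv4.conf.all.log_martians
	# silence the logging on one interface if the traffic is expected
	sudo sysctl -w net.ipv4.conf.eth0.log_martians=0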
	
	
	==> etcd [bce6c48dee44eb7685ae254d926a58ea34762554245257fb759dbfa84d3a1b9d] <==
	{"level":"warn","ts":"2025-10-20T12:42:32.283334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.293074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.299124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.306536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.313857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.320402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.327178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.333705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.340318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.347511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.353288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.372977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.379315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.385422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:42:32.450814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47882","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-20T12:42:52.448729Z","caller":"traceutil/trace.go:172","msg":"trace[1623591967] linearizableReadLoop","detail":"{readStateIndex:444; appliedIndex:444; }","duration":"125.42471ms","start":"2025-10-20T12:42:52.323275Z","end":"2025-10-20T12:42:52.448699Z","steps":["trace[1623591967] 'read index received'  (duration: 125.414798ms)","trace[1623591967] 'applied index is now lower than readState.Index'  (duration: 8.33µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:52.580661Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"257.348667ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T12:42:52.580788Z","caller":"traceutil/trace.go:172","msg":"trace[1030445443] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:432; }","duration":"257.494347ms","start":"2025-10-20T12:42:52.323257Z","end":"2025-10-20T12:42:52.580751Z","steps":["trace[1030445443] 'agreement among raft nodes before linearized reading'  (duration: 125.53814ms)","trace[1030445443] 'range keys from in-memory index tree'  (duration: 131.781251ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:52.581452Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.112372ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356087936677149 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-vpzk5\" mod_revision:381 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-vpzk5\" value_size:4084 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-vpzk5\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-20T12:42:52.581542Z","caller":"traceutil/trace.go:172","msg":"trace[1888332787] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"278.798771ms","start":"2025-10-20T12:42:52.302726Z","end":"2025-10-20T12:42:52.581525Z","steps":["trace[1888332787] 'process raft request'  (duration: 146.026898ms)","trace[1888332787] 'compare'  (duration: 131.815798ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:52.605027Z","caller":"traceutil/trace.go:172","msg":"trace[329180323] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"295.476898ms","start":"2025-10-20T12:42:52.309500Z","end":"2025-10-20T12:42:52.604977Z","steps":["trace[329180323] 'process raft request'  (duration: 295.344792ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T12:42:52.987151Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.300396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T12:42:52.987217Z","caller":"traceutil/trace.go:172","msg":"trace[1321906412] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:437; }","duration":"102.382076ms","start":"2025-10-20T12:42:52.884821Z","end":"2025-10-20T12:42:52.987203Z","steps":["trace[1321906412] 'range keys from in-memory index tree'  (duration: 102.229534ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T12:42:52.987329Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.747408ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2025-10-20T12:42:52.987380Z","caller":"traceutil/trace.go:172","msg":"trace[2135136369] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:437; }","duration":"181.808163ms","start":"2025-10-20T12:42:52.805562Z","end":"2025-10-20T12:42:52.987370Z","steps":["trace[2135136369] 'range keys from in-memory index tree'  (duration: 181.619715ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:43:06 up  1:25,  0 user,  load average: 3.86, 3.44, 2.23
	Linux embed-certs-907116 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0bf4e8f58cbe5e0efdd386ba48b4f57af08d9e4646a9aa58ca8e964e3772b3e1] <==
	I1020 12:42:41.657179       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:42:41.657966       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 12:42:41.658128       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:42:41.658149       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:42:41.658177       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:42:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:42:41.861966       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:42:41.862033       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:42:41.862049       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:42:41.862266       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:42:42.363184       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:42:42.363231       1 metrics.go:72] Registering metrics
	I1020 12:42:42.363314       1 controller.go:711] "Syncing nftables rules"
	I1020 12:42:51.864845       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 12:42:51.864929       1 main.go:301] handling current node
	I1020 12:43:01.862469       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 12:43:01.862503       1 main.go:301] handling current node
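The "nri plugin exited" line above only means CRI-O's NRI socket isn't enabled; kindnet carries on without it. A quick check on the node:

	# an absent socket explains the 'no such file or directory' above
	ls -l /var/run/nri/nri.sock 2>/dev/null || echo "NRI socket not present"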
	
	
	==> kube-apiserver [ef2aa898a829d2e0b7babdc7de70753933e999d9d23b11d3ec6877e5e11cd34a] <==
	I1020 12:42:33.005488       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 12:42:33.005494       1 cache.go:39] Caches are synced for autoregister controller
	I1020 12:42:33.010376       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1020 12:42:33.015919       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:42:33.019285       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:42:33.022128       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:42:33.023301       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:42:33.909387       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1020 12:42:33.913523       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1020 12:42:33.913543       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:42:34.450048       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:42:34.485372       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:42:34.615497       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1020 12:42:34.621720       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1020 12:42:34.623139       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:42:34.627895       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:42:35.032085       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:42:35.718314       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:42:35.734409       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1020 12:42:35.743505       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 12:42:40.486163       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:42:40.491314       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 12:42:40.945794       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1020 12:42:41.137539       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1020 12:43:04.727626       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:46788: use of closed network connection
	
	
	==> kube-controller-manager [368130d8373775a3d1925363e6ad5bc60b0e63b2548c73323a2fd9bd693866ca] <==
	I1020 12:42:40.030065       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:42:40.030077       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:42:40.030077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 12:42:40.030114       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 12:42:40.030228       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 12:42:40.030284       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 12:42:40.030337       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-907116"
	I1020 12:42:40.030404       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1020 12:42:40.030673       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 12:42:40.030710       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 12:42:40.030751       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 12:42:40.030756       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 12:42:40.030803       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 12:42:40.030949       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:42:40.031502       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:42:40.031534       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 12:42:40.031550       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 12:42:40.032719       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 12:42:40.032812       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 12:42:40.034740       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:42:40.037810       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:42:40.042048       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1020 12:42:40.051546       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 12:42:40.051600       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:42:55.032221       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8e0e5558f1f958a5bf937a3d965f1256d14e2b40ce0cdf9688997a5c6733d09d] <==
	I1020 12:42:41.429421       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:42:41.498315       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:42:41.598874       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:42:41.598935       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 12:42:41.599042       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:42:41.621891       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:42:41.621985       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:42:41.628043       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:42:41.628526       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:42:41.628569       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:42:41.630552       1 config.go:309] "Starting node config controller"
	I1020 12:42:41.630637       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:42:41.630651       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:42:41.630709       1 config.go:200] "Starting service config controller"
	I1020 12:42:41.631020       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:42:41.630885       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:42:41.631070       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:42:41.630844       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:42:41.631092       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:42:41.731219       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:42:41.731282       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:42:41.731302       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
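
Editor's note: the single E-level line above is advisory rather than fatal — with nodePortAddresses unset, kube-proxy accepts NodePort connections on every local IP. The log names its own remedy; a sketch of the suggested flag follows (whether minikube forwards extra kube-proxy flags is an assumption, so it is shown against the binary directly):

	# Restrict NodePort listeners to the node's primary address, as the warning suggests
	kube-proxy --nodeport-addresses primary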
	
	
	==> kube-scheduler [7ce06f7ea8c0453f7b6b26ce7c93f836322c6fede4eed702eefc8cb1d1884067] <==
	E1020 12:42:33.094672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:42:33.094908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:42:33.094985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:42:33.094985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:42:33.095025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:42:33.095542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 12:42:33.095554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:42:33.095584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:42:33.095703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:42:33.095739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:42:33.095795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:42:33.095801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 12:42:33.095795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:42:33.934677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:42:33.965881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:42:33.968945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1020 12:42:33.971992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:42:34.006654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:42:34.050887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 12:42:34.078576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:42:34.179896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:42:34.222453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 12:42:34.259673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:42:34.273403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1020 12:42:35.891546       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
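
Editor's note: the burst of "Failed to watch ... is forbidden" errors is startup noise — the scheduler's informers begin listing resources before its RBAC grants are visible, and the errors stop once the caches sync (final line above). A hedged spot-check that the permissions are in place after startup:

	# Confirm the scheduler's binding exists and that a previously denied verb now succeeds
	kubectl --context embed-certs-907116 get clusterrolebinding system:kube-scheduler
	kubectl --context embed-certs-907116 auth can-i list poddisruptionbudgets --as=system:kube-scheduler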
	
	
	==> kubelet <==
	Oct 20 12:42:36 embed-certs-907116 kubelet[1317]: I1020 12:42:36.659533    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-907116" podStartSLOduration=1.6595117419999998 podStartE2EDuration="1.659511742s" podCreationTimestamp="2025-10-20 12:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:36.65899846 +0000 UTC m=+1.167628077" watchObservedRunningTime="2025-10-20 12:42:36.659511742 +0000 UTC m=+1.168141322"
	Oct 20 12:42:36 embed-certs-907116 kubelet[1317]: I1020 12:42:36.673822    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-907116" podStartSLOduration=1.6737974740000001 podStartE2EDuration="1.673797474s" podCreationTimestamp="2025-10-20 12:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:36.673385422 +0000 UTC m=+1.182015009" watchObservedRunningTime="2025-10-20 12:42:36.673797474 +0000 UTC m=+1.182427063"
	Oct 20 12:42:36 embed-certs-907116 kubelet[1317]: I1020 12:42:36.689712    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-907116" podStartSLOduration=1.689687098 podStartE2EDuration="1.689687098s" podCreationTimestamp="2025-10-20 12:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:36.688466383 +0000 UTC m=+1.197095968" watchObservedRunningTime="2025-10-20 12:42:36.689687098 +0000 UTC m=+1.198316679"
	Oct 20 12:42:36 embed-certs-907116 kubelet[1317]: I1020 12:42:36.740672    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-907116" podStartSLOduration=1.740649231 podStartE2EDuration="1.740649231s" podCreationTimestamp="2025-10-20 12:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:36.716576229 +0000 UTC m=+1.225205829" watchObservedRunningTime="2025-10-20 12:42:36.740649231 +0000 UTC m=+1.249278821"
	Oct 20 12:42:40 embed-certs-907116 kubelet[1317]: I1020 12:42:40.046729    1317 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 20 12:42:40 embed-certs-907116 kubelet[1317]: I1020 12:42:40.047467    1317 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 20 12:42:41 embed-certs-907116 kubelet[1317]: I1020 12:42:41.014689    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f01f5d2c-f20c-42ea-a933-b6d15ea40244-lib-modules\") pod \"kube-proxy-s2xbv\" (UID: \"f01f5d2c-f20c-42ea-a933-b6d15ea40244\") " pod="kube-system/kube-proxy-s2xbv"
	Oct 20 12:42:41 embed-certs-907116 kubelet[1317]: I1020 12:42:41.015011    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps5qc\" (UniqueName: \"kubernetes.io/projected/f01f5d2c-f20c-42ea-a933-b6d15ea40244-kube-api-access-ps5qc\") pod \"kube-proxy-s2xbv\" (UID: \"f01f5d2c-f20c-42ea-a933-b6d15ea40244\") " pod="kube-system/kube-proxy-s2xbv"
	Oct 20 12:42:41 embed-certs-907116 kubelet[1317]: I1020 12:42:41.015165    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/86b2fc3f-2d40-4a2d-9068-75b0a952b958-cni-cfg\") pod \"kindnet-24g82\" (UID: \"86b2fc3f-2d40-4a2d-9068-75b0a952b958\") " pod="kube-system/kindnet-24g82"
	Oct 20 12:42:41 embed-certs-907116 kubelet[1317]: I1020 12:42:41.015308    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86b2fc3f-2d40-4a2d-9068-75b0a952b958-xtables-lock\") pod \"kindnet-24g82\" (UID: \"86b2fc3f-2d40-4a2d-9068-75b0a952b958\") " pod="kube-system/kindnet-24g82"
	Oct 20 12:42:41 embed-certs-907116 kubelet[1317]: I1020 12:42:41.015337    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f01f5d2c-f20c-42ea-a933-b6d15ea40244-xtables-lock\") pod \"kube-proxy-s2xbv\" (UID: \"f01f5d2c-f20c-42ea-a933-b6d15ea40244\") " pod="kube-system/kube-proxy-s2xbv"
	Oct 20 12:42:41 embed-certs-907116 kubelet[1317]: I1020 12:42:41.015516    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86b2fc3f-2d40-4a2d-9068-75b0a952b958-lib-modules\") pod \"kindnet-24g82\" (UID: \"86b2fc3f-2d40-4a2d-9068-75b0a952b958\") " pod="kube-system/kindnet-24g82"
	Oct 20 12:42:41 embed-certs-907116 kubelet[1317]: I1020 12:42:41.015568    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d255\" (UniqueName: \"kubernetes.io/projected/86b2fc3f-2d40-4a2d-9068-75b0a952b958-kube-api-access-6d255\") pod \"kindnet-24g82\" (UID: \"86b2fc3f-2d40-4a2d-9068-75b0a952b958\") " pod="kube-system/kindnet-24g82"
	Oct 20 12:42:41 embed-certs-907116 kubelet[1317]: I1020 12:42:41.015596    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f01f5d2c-f20c-42ea-a933-b6d15ea40244-kube-proxy\") pod \"kube-proxy-s2xbv\" (UID: \"f01f5d2c-f20c-42ea-a933-b6d15ea40244\") " pod="kube-system/kube-proxy-s2xbv"
	Oct 20 12:42:41 embed-certs-907116 kubelet[1317]: I1020 12:42:41.658501    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-24g82" podStartSLOduration=1.658475731 podStartE2EDuration="1.658475731s" podCreationTimestamp="2025-10-20 12:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:41.657829577 +0000 UTC m=+6.166459159" watchObservedRunningTime="2025-10-20 12:42:41.658475731 +0000 UTC m=+6.167105318"
	Oct 20 12:42:42 embed-certs-907116 kubelet[1317]: I1020 12:42:42.542727    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s2xbv" podStartSLOduration=2.542246922 podStartE2EDuration="2.542246922s" podCreationTimestamp="2025-10-20 12:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:41.673844406 +0000 UTC m=+6.182473996" watchObservedRunningTime="2025-10-20 12:42:42.542246922 +0000 UTC m=+7.050876509"
	Oct 20 12:42:52 embed-certs-907116 kubelet[1317]: I1020 12:42:52.232657    1317 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 20 12:42:52 embed-certs-907116 kubelet[1317]: I1020 12:42:52.601495    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4lc5\" (UniqueName: \"kubernetes.io/projected/83ece3ef-33e2-4353-9230-6bdd8c7320c0-kube-api-access-p4lc5\") pod \"storage-provisioner\" (UID: \"83ece3ef-33e2-4353-9230-6bdd8c7320c0\") " pod="kube-system/storage-provisioner"
	Oct 20 12:42:52 embed-certs-907116 kubelet[1317]: I1020 12:42:52.601553    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/83ece3ef-33e2-4353-9230-6bdd8c7320c0-tmp\") pod \"storage-provisioner\" (UID: \"83ece3ef-33e2-4353-9230-6bdd8c7320c0\") " pod="kube-system/storage-provisioner"
	Oct 20 12:42:52 embed-certs-907116 kubelet[1317]: I1020 12:42:52.702566    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7422dd44-eb83-44f9-8711-41a74794dfed-config-volume\") pod \"coredns-66bc5c9577-vpzk5\" (UID: \"7422dd44-eb83-44f9-8711-41a74794dfed\") " pod="kube-system/coredns-66bc5c9577-vpzk5"
	Oct 20 12:42:52 embed-certs-907116 kubelet[1317]: I1020 12:42:52.702631    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzbhd\" (UniqueName: \"kubernetes.io/projected/7422dd44-eb83-44f9-8711-41a74794dfed-kube-api-access-hzbhd\") pod \"coredns-66bc5c9577-vpzk5\" (UID: \"7422dd44-eb83-44f9-8711-41a74794dfed\") " pod="kube-system/coredns-66bc5c9577-vpzk5"
	Oct 20 12:42:53 embed-certs-907116 kubelet[1317]: I1020 12:42:53.695652    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vpzk5" podStartSLOduration=12.695630489 podStartE2EDuration="12.695630489s" podCreationTimestamp="2025-10-20 12:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:53.692841882 +0000 UTC m=+18.201471486" watchObservedRunningTime="2025-10-20 12:42:53.695630489 +0000 UTC m=+18.204260091"
	Oct 20 12:42:53 embed-certs-907116 kubelet[1317]: I1020 12:42:53.708671    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.708647874 podStartE2EDuration="12.708647874s" podCreationTimestamp="2025-10-20 12:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 12:42:53.708417484 +0000 UTC m=+18.217047072" watchObservedRunningTime="2025-10-20 12:42:53.708647874 +0000 UTC m=+18.217277464"
	Oct 20 12:42:55 embed-certs-907116 kubelet[1317]: I1020 12:42:55.722039    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5kdv\" (UniqueName: \"kubernetes.io/projected/b456bfa2-8544-4ae8-928b-cf120271b15c-kube-api-access-j5kdv\") pod \"busybox\" (UID: \"b456bfa2-8544-4ae8-928b-cf120271b15c\") " pod="default/busybox"
	Oct 20 12:42:57 embed-certs-907116 kubelet[1317]: I1020 12:42:57.705457    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.300783851 podStartE2EDuration="2.705432866s" podCreationTimestamp="2025-10-20 12:42:55 +0000 UTC" firstStartedPulling="2025-10-20 12:42:55.962251164 +0000 UTC m=+20.470880741" lastFinishedPulling="2025-10-20 12:42:57.366900178 +0000 UTC m=+21.875529756" observedRunningTime="2025-10-20 12:42:57.705199059 +0000 UTC m=+22.213828645" watchObservedRunningTime="2025-10-20 12:42:57.705432866 +0000 UTC m=+22.214062452"
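
Editor's note: the pod_startup_latency_tracker lines carry two durations. For the static control-plane pods the pull timestamps are zero (nothing was pulled), so podStartSLOduration equals podStartE2EDuration; for default/busybox the ~1.40s gap between the two figures is exactly the image-pull window (12:42:55.96 to 12:42:57.37). A hedged way to see the corresponding pull events:

	# Show the scheduling/pulling/started events behind the busybox latency figures
	kubectl --context embed-certs-907116 describe pod busybox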
	
	
	==> storage-provisioner [775623f5661563299f9c735365cbe6db3dbb5af74e3b9f269fcd6f4868afac99] <==
	I1020 12:42:53.069410       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 12:42:53.081330       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 12:42:53.081387       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 12:42:53.084397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:53.092349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:42:53.092535       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 12:42:53.092736       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-907116_3d287b6d-c59f-4a10-93fe-c16796bdb6be!
	I1020 12:42:53.093096       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e684f2b7-228c-4e12-97d9-985f6618132e", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-907116_3d287b6d-c59f-4a10-93fe-c16796bdb6be became leader
	W1020 12:42:53.096048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:53.100072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:42:53.193917       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-907116_3d287b6d-c59f-4a10-93fe-c16796bdb6be!
	W1020 12:42:55.103158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:55.109527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:57.114183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:57.120089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:59.123660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:42:59.128971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:01.132986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:01.168511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:03.171974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:03.180219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:05.183287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:05.187552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
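
Editor's note: the repeating warnings come from the provisioner's leader election, which still renews its lock through a v1 Endpoints object; as the messages note, that API is deprecated in v1.33+ in favor of EndpointSlice. The lock object is named in the log, so it can be inspected directly:

	# View the Endpoints-based leader-election lock the warnings refer to
	kubectl --context embed-certs-907116 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml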
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-907116 -n embed-certs-907116
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-907116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.39s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-874012 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-874012 --alsologtostderr -v=1: exit status 80 (2.036521894s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-874012 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 12:43:38.777578  285718 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:43:38.777891  285718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:43:38.777904  285718 out.go:374] Setting ErrFile to fd 2...
	I1020 12:43:38.777909  285718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:43:38.778198  285718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:43:38.778516  285718 out.go:368] Setting JSON to false
	I1020 12:43:38.778591  285718 mustload.go:65] Loading cluster: default-k8s-diff-port-874012
	I1020 12:43:38.780520  285718 config.go:182] Loaded profile config "default-k8s-diff-port-874012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:43:38.781251  285718 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-874012 --format={{.State.Status}}
	I1020 12:43:38.803692  285718 host.go:66] Checking if "default-k8s-diff-port-874012" exists ...
	I1020 12:43:38.804007  285718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:43:38.873218  285718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-20 12:43:38.86176519 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:43:38.874041  285718 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-874012 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 12:43:38.876179  285718 out.go:179] * Pausing node default-k8s-diff-port-874012 ... 
	I1020 12:43:38.877875  285718 host.go:66] Checking if "default-k8s-diff-port-874012" exists ...
	I1020 12:43:38.878248  285718 ssh_runner.go:195] Run: systemctl --version
	I1020 12:43:38.878308  285718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-874012
	I1020 12:43:38.898172  285718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/default-k8s-diff-port-874012/id_rsa Username:docker}
	I1020 12:43:39.002789  285718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:43:39.020874  285718 pause.go:52] kubelet running: true
	I1020 12:43:39.020952  285718 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:43:39.255805  285718 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:43:39.255907  285718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:43:39.343577  285718 cri.go:89] found id: "fe371429b88344749ac37a938a9d1bdc124657f68155bff696e9d16c03ceb4e6"
	I1020 12:43:39.343601  285718 cri.go:89] found id: "e03f2f95e6c14702b90f8c7799cdb5513504049e5e68dc0d01aace1a70f8e115"
	I1020 12:43:39.343608  285718 cri.go:89] found id: "96ed2fb71faeca4bae41804a971903dfe647f4945e3ac5a8e2c2c362359f0919"
	I1020 12:43:39.343613  285718 cri.go:89] found id: "7866a55261bf64a5c5e00ff9934f5375450ec837c58b9e9ea122dbc5064839b2"
	I1020 12:43:39.343617  285718 cri.go:89] found id: "949fa188399d88fb36148cd3e18aead87c4e1915aac3b52977a50c822f49bd7f"
	I1020 12:43:39.343622  285718 cri.go:89] found id: "950cf2bcf663da8ddc81ce889407cc48e3d12e5e1bd9be508b2b13a09017120c"
	I1020 12:43:39.343626  285718 cri.go:89] found id: "361bbce2ef1dab79033c19296471736ded91254dc81373034fb69f4e8ab8a98c"
	I1020 12:43:39.343630  285718 cri.go:89] found id: "4701f0f003c887f114d5da2a88fc8b6767f57ea38df31b2ec658e6f9e2ca07df"
	I1020 12:43:39.343634  285718 cri.go:89] found id: "7c78acc071dce4799d081c9cd84fb7f3990161652fd814c617b6d088840d020a"
	I1020 12:43:39.343653  285718 cri.go:89] found id: "52f945f0582ca2066d66902a718c474fe74c72c28edc6b6760d429767fb96487"
	I1020 12:43:39.343659  285718 cri.go:89] found id: "997f5fb70cf17401f9f118f22b72542195a6fa932ca73033e3cb05b2879ccce7"
	I1020 12:43:39.343663  285718 cri.go:89] found id: ""
	I1020 12:43:39.343707  285718 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:43:39.359531  285718 retry.go:31] will retry after 312.655166ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:43:39Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:43:39.673003  285718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:43:39.689892  285718 pause.go:52] kubelet running: false
	I1020 12:43:39.689955  285718 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:43:39.889639  285718 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:43:39.889735  285718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:43:39.977146  285718 cri.go:89] found id: "fe371429b88344749ac37a938a9d1bdc124657f68155bff696e9d16c03ceb4e6"
	I1020 12:43:39.977179  285718 cri.go:89] found id: "e03f2f95e6c14702b90f8c7799cdb5513504049e5e68dc0d01aace1a70f8e115"
	I1020 12:43:39.977184  285718 cri.go:89] found id: "96ed2fb71faeca4bae41804a971903dfe647f4945e3ac5a8e2c2c362359f0919"
	I1020 12:43:39.977190  285718 cri.go:89] found id: "7866a55261bf64a5c5e00ff9934f5375450ec837c58b9e9ea122dbc5064839b2"
	I1020 12:43:39.977195  285718 cri.go:89] found id: "949fa188399d88fb36148cd3e18aead87c4e1915aac3b52977a50c822f49bd7f"
	I1020 12:43:39.977199  285718 cri.go:89] found id: "950cf2bcf663da8ddc81ce889407cc48e3d12e5e1bd9be508b2b13a09017120c"
	I1020 12:43:39.977203  285718 cri.go:89] found id: "361bbce2ef1dab79033c19296471736ded91254dc81373034fb69f4e8ab8a98c"
	I1020 12:43:39.977207  285718 cri.go:89] found id: "4701f0f003c887f114d5da2a88fc8b6767f57ea38df31b2ec658e6f9e2ca07df"
	I1020 12:43:39.977212  285718 cri.go:89] found id: "7c78acc071dce4799d081c9cd84fb7f3990161652fd814c617b6d088840d020a"
	I1020 12:43:39.977219  285718 cri.go:89] found id: "52f945f0582ca2066d66902a718c474fe74c72c28edc6b6760d429767fb96487"
	I1020 12:43:39.977222  285718 cri.go:89] found id: "997f5fb70cf17401f9f118f22b72542195a6fa932ca73033e3cb05b2879ccce7"
	I1020 12:43:39.977226  285718 cri.go:89] found id: ""
	I1020 12:43:39.977269  285718 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:43:39.992916  285718 retry.go:31] will retry after 424.847394ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:43:39Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:43:40.418679  285718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:43:40.436402  285718 pause.go:52] kubelet running: false
	I1020 12:43:40.436477  285718 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:43:40.645978  285718 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:43:40.646060  285718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:43:40.731588  285718 cri.go:89] found id: "fe371429b88344749ac37a938a9d1bdc124657f68155bff696e9d16c03ceb4e6"
	I1020 12:43:40.731614  285718 cri.go:89] found id: "e03f2f95e6c14702b90f8c7799cdb5513504049e5e68dc0d01aace1a70f8e115"
	I1020 12:43:40.731620  285718 cri.go:89] found id: "96ed2fb71faeca4bae41804a971903dfe647f4945e3ac5a8e2c2c362359f0919"
	I1020 12:43:40.731626  285718 cri.go:89] found id: "7866a55261bf64a5c5e00ff9934f5375450ec837c58b9e9ea122dbc5064839b2"
	I1020 12:43:40.731631  285718 cri.go:89] found id: "949fa188399d88fb36148cd3e18aead87c4e1915aac3b52977a50c822f49bd7f"
	I1020 12:43:40.731636  285718 cri.go:89] found id: "950cf2bcf663da8ddc81ce889407cc48e3d12e5e1bd9be508b2b13a09017120c"
	I1020 12:43:40.731640  285718 cri.go:89] found id: "361bbce2ef1dab79033c19296471736ded91254dc81373034fb69f4e8ab8a98c"
	I1020 12:43:40.731644  285718 cri.go:89] found id: "4701f0f003c887f114d5da2a88fc8b6767f57ea38df31b2ec658e6f9e2ca07df"
	I1020 12:43:40.731649  285718 cri.go:89] found id: "7c78acc071dce4799d081c9cd84fb7f3990161652fd814c617b6d088840d020a"
	I1020 12:43:40.731656  285718 cri.go:89] found id: "52f945f0582ca2066d66902a718c474fe74c72c28edc6b6760d429767fb96487"
	I1020 12:43:40.731661  285718 cri.go:89] found id: "997f5fb70cf17401f9f118f22b72542195a6fa932ca73033e3cb05b2879ccce7"
	I1020 12:43:40.731666  285718 cri.go:89] found id: ""
	I1020 12:43:40.731711  285718 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:43:40.749235  285718 out.go:203] 
	W1020 12:43:40.750767  285718 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:43:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:43:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:43:40.750814  285718 out.go:285] * 
	* 
	W1020 12:43:40.757041  285718 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:43:40.759904  285718 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-874012 --alsologtostderr -v=1 failed: exit status 80
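Editor's note: the pause flow above stops kubelet, enumerates CRI containers with crictl (which succeeds), then shells out to `sudo runc list -f json`; the initial attempt and both retries fail because /run/runc does not exist on the node. One plausible reading, offered as an assumption rather than something the log proves, is that this crio node's OCI runtime keeps its state elsewhere, so the runc state directory was never created. A hedged reproduction against the node:

	# Re-run the exact failing command inside the node
	minikube -p default-k8s-diff-port-874012 ssh -- sudo runc list -f json
	# See which runtime state directories exist (crun is a common crio default, named here as an assumption)
	minikube -p default-k8s-diff-port-874012 ssh -- "sudo ls -d /run/runc /run/crun; sudo crictl ps"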
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-874012
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-874012:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7",
	        "Created": "2025-10-20T12:41:38.524846166Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272939,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:42:41.663509299Z",
	            "FinishedAt": "2025-10-20T12:42:40.613725086Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7/hostname",
	        "HostsPath": "/var/lib/docker/containers/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7/hosts",
	        "LogPath": "/var/lib/docker/containers/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7-json.log",
	        "Name": "/default-k8s-diff-port-874012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-874012:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-874012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7",
	                "LowerDir": "/var/lib/docker/overlay2/23ccaa2c1eba589d41d5dd53dc9f8e4141a1f29a896f0142badd4af6e87805d6-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23ccaa2c1eba589d41d5dd53dc9f8e4141a1f29a896f0142badd4af6e87805d6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23ccaa2c1eba589d41d5dd53dc9f8e4141a1f29a896f0142badd4af6e87805d6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23ccaa2c1eba589d41d5dd53dc9f8e4141a1f29a896f0142badd4af6e87805d6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-874012",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-874012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-874012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-874012",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-874012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6963550a8b83a3161c6af9b71432f46dac540327d6a58054f3fd22889d90e2c0",
	            "SandboxKey": "/var/run/docker/netns/6963550a8b83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-874012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:ca:e1:21:0e:bc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "071054924bdb32d774c4d0c0f3c167909dde1b983fbdc59f24f908b03d171adf",
	                    "EndpointID": "bade128faf5d2063cbd63ac376020bf9b21a6d2a73466d75d4d193e39ba48bcc",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-874012",
	                        "fbc9ff1c79c1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012: exit status 2 (385.013044ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-874012 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-874012 logs -n 25: (1.615237834s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p cert-expiration-365628 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p cert-expiration-365628                                                                                                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p disable-driver-mounts-796609                                                                                                                                                                                                               │ disable-driver-mounts-796609 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-874012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-916479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-874012 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ stop    │ -p newest-cni-916479 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-916479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ image   │ newest-cni-916479 image list --format=json                                                                                                                                                                                                    │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ pause   │ -p newest-cni-916479 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-874012 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:43 UTC │
	│ delete  │ -p newest-cni-916479                                                                                                                                                                                                                          │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p newest-cni-916479                                                                                                                                                                                                                          │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p auto-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-312375                  │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-907116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ stop    │ -p embed-certs-907116 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-907116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ start   │ -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 pgrep -a kubelet                                                                                                                                                                                                               │ auto-312375                  │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ image   │ default-k8s-diff-port-874012 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ pause   │ -p default-k8s-diff-port-874012 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:43:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:43:23.706101  282174 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:43:23.706205  282174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:43:23.706212  282174 out.go:374] Setting ErrFile to fd 2...
	I1020 12:43:23.706225  282174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:43:23.706449  282174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:43:23.706935  282174 out.go:368] Setting JSON to false
	I1020 12:43:23.708227  282174 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5153,"bootTime":1760959051,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:43:23.708330  282174 start.go:141] virtualization: kvm guest
	I1020 12:43:23.710747  282174 out.go:179] * [embed-certs-907116] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:43:23.712519  282174 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:43:23.712541  282174 notify.go:220] Checking for updates...
	I1020 12:43:23.715514  282174 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:43:23.717095  282174 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:43:23.718463  282174 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:43:23.719947  282174 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:43:23.721420  282174 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:43:23.723309  282174 config.go:182] Loaded profile config "embed-certs-907116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:43:23.723838  282174 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:43:23.749724  282174 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:43:23.749840  282174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:43:23.809620  282174 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-20 12:43:23.798648685 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:43:23.809728  282174 docker.go:318] overlay module found
	I1020 12:43:23.811599  282174 out.go:179] * Using the docker driver based on existing profile
	I1020 12:43:23.812865  282174 start.go:305] selected driver: docker
	I1020 12:43:23.812883  282174 start.go:925] validating driver "docker" against &{Name:embed-certs-907116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:43:23.812962  282174 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:43:23.813549  282174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:43:23.870075  282174 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-20 12:43:23.860331312 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:43:23.870333  282174 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:43:23.870359  282174 cni.go:84] Creating CNI manager for ""
	I1020 12:43:23.870404  282174 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:43:23.870437  282174 start.go:349] cluster config:
	{Name:embed-certs-907116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:43:23.872452  282174 out.go:179] * Starting "embed-certs-907116" primary control-plane node in "embed-certs-907116" cluster
	I1020 12:43:23.873588  282174 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:43:23.874910  282174 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:43:23.876267  282174 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:43:23.876315  282174 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:43:23.876318  282174 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:43:23.876421  282174 cache.go:58] Caching tarball of preloaded images
	I1020 12:43:23.876499  282174 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:43:23.876510  282174 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:43:23.876607  282174 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/config.json ...
	I1020 12:43:23.897722  282174 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:43:23.897741  282174 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:43:23.897757  282174 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:43:23.897810  282174 start.go:360] acquireMachinesLock for embed-certs-907116: {Name:mk081262f5d599396d0c232c9311858444bc2e47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:43:23.897878  282174 start.go:364] duration metric: took 38.1µs to acquireMachinesLock for "embed-certs-907116"
	I1020 12:43:23.897896  282174 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:43:23.897901  282174 fix.go:54] fixHost starting: 
	I1020 12:43:23.898095  282174 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:43:23.917316  282174 fix.go:112] recreateIfNeeded on embed-certs-907116: state=Stopped err=<nil>
	W1020 12:43:23.917345  282174 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 12:43:21.826902  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:21.827348  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:21.827396  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:21.827449  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:21.857399  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:21.857416  236655 cri.go:89] found id: ""
	I1020 12:43:21.857424  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:21.857473  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:21.861487  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:21.861549  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:21.888950  236655 cri.go:89] found id: ""
	I1020 12:43:21.888975  236655 logs.go:282] 0 containers: []
	W1020 12:43:21.888985  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:21.888991  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:21.889102  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:43:21.916702  236655 cri.go:89] found id: ""
	I1020 12:43:21.916730  236655 logs.go:282] 0 containers: []
	W1020 12:43:21.916740  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:43:21.916746  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:43:21.916813  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:43:21.946607  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:21.946633  236655 cri.go:89] found id: ""
	I1020 12:43:21.946643  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:43:21.946702  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:21.951545  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:43:21.951616  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:43:21.980724  236655 cri.go:89] found id: ""
	I1020 12:43:21.980746  236655 logs.go:282] 0 containers: []
	W1020 12:43:21.980754  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:43:21.980760  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:43:21.980832  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:43:22.007635  236655 cri.go:89] found id: "08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:22.007658  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:43:22.007663  236655 cri.go:89] found id: ""
	I1020 12:43:22.007672  236655 logs.go:282] 2 containers: [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb]
	I1020 12:43:22.007732  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:22.011969  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:22.016043  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:43:22.016113  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:43:22.045288  236655 cri.go:89] found id: ""
	I1020 12:43:22.045319  236655 logs.go:282] 0 containers: []
	W1020 12:43:22.045330  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:43:22.045348  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:43:22.045403  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:43:22.075166  236655 cri.go:89] found id: ""
	I1020 12:43:22.075194  236655 logs.go:282] 0 containers: []
	W1020 12:43:22.075201  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:43:22.075216  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:43:22.075227  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:43:22.107132  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:43:22.107157  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:43:22.196060  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:43:22.196098  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:43:22.254612  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:43:22.254632  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:43:22.254646  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:22.289682  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:43:22.289716  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:22.343109  236655 logs.go:123] Gathering logs for kube-controller-manager [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a] ...
	I1020 12:43:22.343142  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:22.372250  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:43:22.372282  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:43:22.400377  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:43:22.400405  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:43:22.415787  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:43:22.415811  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:43:24.972831  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:24.973333  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:24.973384  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:24.973439  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:25.001992  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:25.002017  236655 cri.go:89] found id: ""
	I1020 12:43:25.002027  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:25.002096  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:25.006734  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:25.006815  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:25.033907  236655 cri.go:89] found id: ""
	I1020 12:43:25.033939  236655 logs.go:282] 0 containers: []
	W1020 12:43:25.033950  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:25.033957  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:25.034024  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:43:25.062007  236655 cri.go:89] found id: ""
	I1020 12:43:25.062031  236655 logs.go:282] 0 containers: []
	W1020 12:43:25.062045  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:43:25.062050  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:43:25.062109  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:43:25.090680  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:25.090699  236655 cri.go:89] found id: ""
	I1020 12:43:25.090708  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:43:25.090766  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:25.095189  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:43:25.095259  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:43:25.122855  236655 cri.go:89] found id: ""
	I1020 12:43:25.122881  236655 logs.go:282] 0 containers: []
	W1020 12:43:25.122888  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:43:25.122894  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:43:25.122950  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:43:25.150747  236655 cri.go:89] found id: "08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:25.150786  236655 cri.go:89] found id: ""
	I1020 12:43:25.150796  236655 logs.go:282] 1 containers: [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a]
	I1020 12:43:25.150855  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:25.154809  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:43:25.154876  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:43:25.181663  236655 cri.go:89] found id: ""
	I1020 12:43:25.181689  236655 logs.go:282] 0 containers: []
	W1020 12:43:25.181697  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:43:25.181703  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:43:25.181758  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:43:25.208702  236655 cri.go:89] found id: ""
	I1020 12:43:25.208735  236655 logs.go:282] 0 containers: []
	W1020 12:43:25.208746  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:43:25.208757  236655 logs.go:123] Gathering logs for kube-controller-manager [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a] ...
	I1020 12:43:25.208797  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:25.236136  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:43:25.236165  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:43:25.294014  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:43:25.294056  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:43:25.324895  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:43:25.324922  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:43:25.428345  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:43:25.428377  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:43:25.444408  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:43:25.444438  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:43:25.503440  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:43:25.503462  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:43:25.503479  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:25.541399  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:43:25.541432  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	W1020 12:43:21.581796  272557 pod_ready.go:104] pod "coredns-66bc5c9577-vd5sd" is not "Ready", error: <nil>
	W1020 12:43:24.081803  272557 pod_ready.go:104] pod "coredns-66bc5c9577-vd5sd" is not "Ready", error: <nil>
	I1020 12:43:25.580597  272557 pod_ready.go:94] pod "coredns-66bc5c9577-vd5sd" is "Ready"
	I1020 12:43:25.580625  272557 pod_ready.go:86] duration metric: took 32.005357365s for pod "coredns-66bc5c9577-vd5sd" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.583091  272557 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.587013  272557 pod_ready.go:94] pod "etcd-default-k8s-diff-port-874012" is "Ready"
	I1020 12:43:25.587034  272557 pod_ready.go:86] duration metric: took 3.918216ms for pod "etcd-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.588790  272557 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.592449  272557 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-874012" is "Ready"
	I1020 12:43:25.592483  272557 pod_ready.go:86] duration metric: took 3.662358ms for pod "kube-apiserver-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.594352  272557 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.779237  272557 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-874012" is "Ready"
	I1020 12:43:25.779266  272557 pod_ready.go:86] duration metric: took 184.894574ms for pod "kube-controller-manager-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.978391  272557 pod_ready.go:83] waiting for pod "kube-proxy-bbw6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:26.378449  272557 pod_ready.go:94] pod "kube-proxy-bbw6k" is "Ready"
	I1020 12:43:26.378476  272557 pod_ready.go:86] duration metric: took 400.059178ms for pod "kube-proxy-bbw6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:26.578871  272557 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:26.978767  272557 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-874012" is "Ready"
	I1020 12:43:26.978825  272557 pod_ready.go:86] duration metric: took 399.922336ms for pod "kube-scheduler-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:26.978838  272557 pod_ready.go:40] duration metric: took 33.407934682s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:43:27.027516  272557 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:43:27.029988  272557 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-874012" cluster and "default" namespace by default
	W1020 12:43:23.896589  275397 node_ready.go:57] node "auto-312375" has "Ready":"False" status (will retry)
	W1020 12:43:26.396467  275397 node_ready.go:57] node "auto-312375" has "Ready":"False" status (will retry)
	I1020 12:43:26.896574  275397 node_ready.go:49] node "auto-312375" is "Ready"
	I1020 12:43:26.896613  275397 node_ready.go:38] duration metric: took 11.503638268s for node "auto-312375" to be "Ready" ...
	I1020 12:43:26.896632  275397 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:43:26.896700  275397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:43:26.910084  275397 api_server.go:72] duration metric: took 11.798943592s to wait for apiserver process to appear ...
	I1020 12:43:26.910117  275397 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:43:26.910157  275397 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:43:26.915069  275397 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 12:43:26.916040  275397 api_server.go:141] control plane version: v1.34.1
	I1020 12:43:26.916067  275397 api_server.go:131] duration metric: took 5.942528ms to wait for apiserver health ...
	I1020 12:43:26.916077  275397 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:43:26.921411  275397 system_pods.go:59] 8 kube-system pods found
	I1020 12:43:26.921454  275397 system_pods.go:61] "coredns-66bc5c9577-rgfv9" [ca422c45-8ea1-4f70-bf28-7c3b25880131] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:43:26.921469  275397 system_pods.go:61] "etcd-auto-312375" [0ce79a2c-44b2-43de-90b6-9aa5c277f2dc] Running
	I1020 12:43:26.921477  275397 system_pods.go:61] "kindnet-mb9vw" [3822b15a-6712-4ed6-81b2-2f87ee8a91df] Running
	I1020 12:43:26.921491  275397 system_pods.go:61] "kube-apiserver-auto-312375" [7a5f06de-c051-4773-a515-51ffcea32b82] Running
	I1020 12:43:26.921501  275397 system_pods.go:61] "kube-controller-manager-auto-312375" [af5994aa-ebbf-4194-adae-55a350e0d6a0] Running
	I1020 12:43:26.921506  275397 system_pods.go:61] "kube-proxy-xs7qd" [cab32b63-1b59-4151-b817-ca71b2f21e33] Running
	I1020 12:43:26.921519  275397 system_pods.go:61] "kube-scheduler-auto-312375" [692558fd-9ac7-48f1-9724-560e2dbfc8c0] Running
	I1020 12:43:26.921526  275397 system_pods.go:61] "storage-provisioner" [37917c30-9182-4df7-bc90-29929fcdc209] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:43:26.921538  275397 system_pods.go:74] duration metric: took 5.453931ms to wait for pod list to return data ...
	I1020 12:43:26.921548  275397 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:43:26.924912  275397 default_sa.go:45] found service account: "default"
	I1020 12:43:26.924937  275397 default_sa.go:55] duration metric: took 3.383041ms for default service account to be created ...
	I1020 12:43:26.924947  275397 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:43:27.021004  275397 system_pods.go:86] 8 kube-system pods found
	I1020 12:43:27.021061  275397 system_pods.go:89] "coredns-66bc5c9577-rgfv9" [ca422c45-8ea1-4f70-bf28-7c3b25880131] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:43:27.021069  275397 system_pods.go:89] "etcd-auto-312375" [0ce79a2c-44b2-43de-90b6-9aa5c277f2dc] Running
	I1020 12:43:27.021076  275397 system_pods.go:89] "kindnet-mb9vw" [3822b15a-6712-4ed6-81b2-2f87ee8a91df] Running
	I1020 12:43:27.021081  275397 system_pods.go:89] "kube-apiserver-auto-312375" [7a5f06de-c051-4773-a515-51ffcea32b82] Running
	I1020 12:43:27.021087  275397 system_pods.go:89] "kube-controller-manager-auto-312375" [af5994aa-ebbf-4194-adae-55a350e0d6a0] Running
	I1020 12:43:27.021093  275397 system_pods.go:89] "kube-proxy-xs7qd" [cab32b63-1b59-4151-b817-ca71b2f21e33] Running
	I1020 12:43:27.021099  275397 system_pods.go:89] "kube-scheduler-auto-312375" [692558fd-9ac7-48f1-9724-560e2dbfc8c0] Running
	I1020 12:43:27.021107  275397 system_pods.go:89] "storage-provisioner" [37917c30-9182-4df7-bc90-29929fcdc209] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:43:27.021136  275397 retry.go:31] will retry after 293.826364ms: missing components: kube-dns
	I1020 12:43:23.919270  282174 out.go:252] * Restarting existing docker container for "embed-certs-907116" ...
	I1020 12:43:23.919343  282174 cli_runner.go:164] Run: docker start embed-certs-907116
	I1020 12:43:24.172969  282174 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:43:24.191198  282174 kic.go:430] container "embed-certs-907116" state is running.
	I1020 12:43:24.191657  282174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-907116
	I1020 12:43:24.210842  282174 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/config.json ...
	I1020 12:43:24.211062  282174 machine.go:93] provisionDockerMachine start ...
	I1020 12:43:24.211122  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:24.229699  282174 main.go:141] libmachine: Using SSH client type: native
	I1020 12:43:24.229966  282174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1020 12:43:24.229983  282174 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:43:24.230631  282174 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44900->127.0.0.1:33103: read: connection reset by peer
	I1020 12:43:27.378887  282174 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-907116
	
	I1020 12:43:27.378916  282174 ubuntu.go:182] provisioning hostname "embed-certs-907116"
	I1020 12:43:27.378984  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:27.397329  282174 main.go:141] libmachine: Using SSH client type: native
	I1020 12:43:27.397559  282174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1020 12:43:27.397573  282174 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-907116 && echo "embed-certs-907116" | sudo tee /etc/hostname
	I1020 12:43:27.550953  282174 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-907116
	
	I1020 12:43:27.551037  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:27.570194  282174 main.go:141] libmachine: Using SSH client type: native
	I1020 12:43:27.570489  282174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1020 12:43:27.570514  282174 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-907116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-907116/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-907116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:43:27.715553  282174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:43:27.715583  282174 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:43:27.715619  282174 ubuntu.go:190] setting up certificates
	I1020 12:43:27.715629  282174 provision.go:84] configureAuth start
	I1020 12:43:27.715687  282174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-907116
	I1020 12:43:27.733741  282174 provision.go:143] copyHostCerts
	I1020 12:43:27.733829  282174 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:43:27.733849  282174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:43:27.733927  282174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:43:27.734020  282174 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:43:27.734035  282174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:43:27.734066  282174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:43:27.734171  282174 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:43:27.734183  282174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:43:27.734208  282174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:43:27.734256  282174 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.embed-certs-907116 san=[127.0.0.1 192.168.76.2 embed-certs-907116 localhost minikube]
	I1020 12:43:27.811854  282174 provision.go:177] copyRemoteCerts
	I1020 12:43:27.811921  282174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:43:27.811961  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:27.830550  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:27.932830  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:43:27.951988  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 12:43:27.970519  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 12:43:27.988179  282174 provision.go:87] duration metric: took 272.535074ms to configureAuth
	I1020 12:43:27.988209  282174 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:43:27.988396  282174 config.go:182] Loaded profile config "embed-certs-907116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:43:27.988502  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:28.008448  282174 main.go:141] libmachine: Using SSH client type: native
	I1020 12:43:28.008782  282174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1020 12:43:28.008808  282174 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:43:28.325424  282174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:43:28.325453  282174 machine.go:96] duration metric: took 4.114377236s to provisionDockerMachine
	I1020 12:43:28.325466  282174 start.go:293] postStartSetup for "embed-certs-907116" (driver="docker")
	I1020 12:43:28.325562  282174 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:43:28.325633  282174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:43:28.325679  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:28.348002  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:28.449997  282174 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:43:28.454678  282174 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:43:28.454714  282174 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:43:28.454727  282174 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:43:28.454870  282174 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:43:28.454986  282174 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:43:28.455122  282174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:43:28.463446  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:43:28.481909  282174 start.go:296] duration metric: took 156.427219ms for postStartSetup
	I1020 12:43:28.481988  282174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:43:28.482045  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:28.503288  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:28.601973  282174 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
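
The two df probes above each parse one field from df's second output line (values below are illustrative):

    df -h  /var | awk 'NR==2{print $5}'   # percent of /var used, e.g. "23%"
    df -BG /var | awk 'NR==2{print $4}'   # gigabytes still available, e.g. "230G"
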
	I1020 12:43:28.606607  282174 fix.go:56] duration metric: took 4.708699618s for fixHost
	I1020 12:43:28.606631  282174 start.go:83] releasing machines lock for "embed-certs-907116", held for 4.708743183s
	I1020 12:43:28.606697  282174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-907116
	I1020 12:43:28.626869  282174 ssh_runner.go:195] Run: cat /version.json
	I1020 12:43:28.626941  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:28.626987  282174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:43:28.627061  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:28.648328  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:28.649963  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:27.319141  275397 system_pods.go:86] 8 kube-system pods found
	I1020 12:43:27.319178  275397 system_pods.go:89] "coredns-66bc5c9577-rgfv9" [ca422c45-8ea1-4f70-bf28-7c3b25880131] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:43:27.319186  275397 system_pods.go:89] "etcd-auto-312375" [0ce79a2c-44b2-43de-90b6-9aa5c277f2dc] Running
	I1020 12:43:27.319194  275397 system_pods.go:89] "kindnet-mb9vw" [3822b15a-6712-4ed6-81b2-2f87ee8a91df] Running
	I1020 12:43:27.319200  275397 system_pods.go:89] "kube-apiserver-auto-312375" [7a5f06de-c051-4773-a515-51ffcea32b82] Running
	I1020 12:43:27.319205  275397 system_pods.go:89] "kube-controller-manager-auto-312375" [af5994aa-ebbf-4194-adae-55a350e0d6a0] Running
	I1020 12:43:27.319212  275397 system_pods.go:89] "kube-proxy-xs7qd" [cab32b63-1b59-4151-b817-ca71b2f21e33] Running
	I1020 12:43:27.319217  275397 system_pods.go:89] "kube-scheduler-auto-312375" [692558fd-9ac7-48f1-9724-560e2dbfc8c0] Running
	I1020 12:43:27.319225  275397 system_pods.go:89] "storage-provisioner" [37917c30-9182-4df7-bc90-29929fcdc209] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:43:27.319242  275397 retry.go:31] will retry after 248.682111ms: missing components: kube-dns
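
The retry above is waiting for CoreDNS to leave Pending; an equivalent manual probe (illustrative kubectl usage, not minikube's code) would be:

    kubectl -n kube-system get pods -l k8s-app=kube-dns \
      -o jsonpath='{.items[*].status.phase}'   # expect "Running"
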
	I1020 12:43:27.572329  275397 system_pods.go:86] 8 kube-system pods found
	I1020 12:43:27.572365  275397 system_pods.go:89] "coredns-66bc5c9577-rgfv9" [ca422c45-8ea1-4f70-bf28-7c3b25880131] Running
	I1020 12:43:27.572376  275397 system_pods.go:89] "etcd-auto-312375" [0ce79a2c-44b2-43de-90b6-9aa5c277f2dc] Running
	I1020 12:43:27.572381  275397 system_pods.go:89] "kindnet-mb9vw" [3822b15a-6712-4ed6-81b2-2f87ee8a91df] Running
	I1020 12:43:27.572386  275397 system_pods.go:89] "kube-apiserver-auto-312375" [7a5f06de-c051-4773-a515-51ffcea32b82] Running
	I1020 12:43:27.572393  275397 system_pods.go:89] "kube-controller-manager-auto-312375" [af5994aa-ebbf-4194-adae-55a350e0d6a0] Running
	I1020 12:43:27.572403  275397 system_pods.go:89] "kube-proxy-xs7qd" [cab32b63-1b59-4151-b817-ca71b2f21e33] Running
	I1020 12:43:27.572409  275397 system_pods.go:89] "kube-scheduler-auto-312375" [692558fd-9ac7-48f1-9724-560e2dbfc8c0] Running
	I1020 12:43:27.572418  275397 system_pods.go:89] "storage-provisioner" [37917c30-9182-4df7-bc90-29929fcdc209] Running
	I1020 12:43:27.572429  275397 system_pods.go:126] duration metric: took 647.473655ms to wait for k8s-apps to be running ...
	I1020 12:43:27.572442  275397 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:43:27.572492  275397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:43:27.585966  275397 system_svc.go:56] duration metric: took 13.513469ms WaitForService to wait for kubelet
	I1020 12:43:27.585999  275397 kubeadm.go:586] duration metric: took 12.474864066s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:43:27.586024  275397 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:43:27.589080  275397 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:43:27.589106  275397 node_conditions.go:123] node cpu capacity is 8
	I1020 12:43:27.589122  275397 node_conditions.go:105] duration metric: took 3.092958ms to run NodePressure ...
	I1020 12:43:27.589136  275397 start.go:241] waiting for startup goroutines ...
	I1020 12:43:27.589145  275397 start.go:246] waiting for cluster config update ...
	I1020 12:43:27.589160  275397 start.go:255] writing updated cluster config ...
	I1020 12:43:27.589457  275397 ssh_runner.go:195] Run: rm -f paused
	I1020 12:43:27.593486  275397 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:43:27.672185  275397 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rgfv9" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.676419  275397 pod_ready.go:94] pod "coredns-66bc5c9577-rgfv9" is "Ready"
	I1020 12:43:27.676442  275397 pod_ready.go:86] duration metric: took 4.229037ms for pod "coredns-66bc5c9577-rgfv9" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.678380  275397 pod_ready.go:83] waiting for pod "etcd-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.681811  275397 pod_ready.go:94] pod "etcd-auto-312375" is "Ready"
	I1020 12:43:27.681832  275397 pod_ready.go:86] duration metric: took 3.43447ms for pod "etcd-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.683534  275397 pod_ready.go:83] waiting for pod "kube-apiserver-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.687220  275397 pod_ready.go:94] pod "kube-apiserver-auto-312375" is "Ready"
	I1020 12:43:27.687240  275397 pod_ready.go:86] duration metric: took 3.685775ms for pod "kube-apiserver-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.689089  275397 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.997592  275397 pod_ready.go:94] pod "kube-controller-manager-auto-312375" is "Ready"
	I1020 12:43:27.997616  275397 pod_ready.go:86] duration metric: took 308.508006ms for pod "kube-controller-manager-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:28.197897  275397 pod_ready.go:83] waiting for pod "kube-proxy-xs7qd" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:28.596622  275397 pod_ready.go:94] pod "kube-proxy-xs7qd" is "Ready"
	I1020 12:43:28.596649  275397 pod_ready.go:86] duration metric: took 398.721669ms for pod "kube-proxy-xs7qd" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:28.797764  275397 pod_ready.go:83] waiting for pod "kube-scheduler-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:29.197752  275397 pod_ready.go:94] pod "kube-scheduler-auto-312375" is "Ready"
	I1020 12:43:29.197805  275397 pod_ready.go:86] duration metric: took 399.99361ms for pod "kube-scheduler-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:29.197821  275397 pod_ready.go:40] duration metric: took 1.604309838s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:43:29.249343  275397 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:43:29.251344  275397 out.go:179] * Done! kubectl is now configured to use "auto-312375" cluster and "default" namespace by default
	I1020 12:43:28.801244  282174 ssh_runner.go:195] Run: systemctl --version
	I1020 12:43:28.807957  282174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:43:28.842977  282174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:43:28.847963  282174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:43:28.848035  282174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:43:28.856212  282174 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
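
Nothing matched in this run, but had a bridge/podman CNI config existed, the find -exec above would have renamed it out of CNI's view, e.g. (filename hypothetical):

    sudo mv /etc/cni/net.d/87-podman-bridge.conflist \
            /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
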
	I1020 12:43:28.856238  282174 start.go:495] detecting cgroup driver to use...
	I1020 12:43:28.856270  282174 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:43:28.856304  282174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:43:28.870914  282174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:43:28.883680  282174 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:43:28.883734  282174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:43:28.898417  282174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:43:28.911087  282174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:43:28.997504  282174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:43:29.082457  282174 docker.go:234] disabling docker service ...
	I1020 12:43:29.082513  282174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:43:29.097283  282174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:43:29.110809  282174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:43:29.196658  282174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:43:29.284286  282174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
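
The cri-docker/docker shutdown sequence above applies the same systemd pattern per unit: stop, disable the socket (so socket activation cannot restart it), then mask the service. A condensed equivalent (sketch, not minikube's code):

    for u in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$u"                            # stop whatever is running
    done
    sudo systemctl disable cri-docker.socket docker.socket   # no socket activation
    sudo systemctl mask cri-docker.service docker.service    # block manual restarts
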
	I1020 12:43:29.298139  282174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:43:29.314539  282174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:43:29.314599  282174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:43:29.324387  282174 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:43:29.324459  282174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:43:29.334972  282174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:43:29.344475  282174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:43:29.354330  282174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:43:29.364396  282174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:43:29.376614  282174 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:43:29.386393  282174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
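
Taken together, the crictl.yaml write and the sed edits above leave the runtime configured roughly as follows (reconstructed illustration; CRI-O's section headers and any surrounding file contents are assumed, since the log only shows the edits):

    # /etc/crictl.yaml -- as written verbatim by the tee above
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf -- net effect of the sed edits (sketch)
    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
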
	I1020 12:43:29.396462  282174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:43:29.404612  282174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:43:29.412821  282174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:43:29.505899  282174 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:43:29.622384  282174 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:43:29.622457  282174 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:43:29.627186  282174 start.go:563] Will wait 60s for crictl version
	I1020 12:43:29.627269  282174 ssh_runner.go:195] Run: which crictl
	I1020 12:43:29.631624  282174 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:43:29.659500  282174 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:43:29.659582  282174 ssh_runner.go:195] Run: crio --version
	I1020 12:43:29.696494  282174 ssh_runner.go:195] Run: crio --version
	I1020 12:43:29.729019  282174 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:43:29.731645  282174 cli_runner.go:164] Run: docker network inspect embed-certs-907116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:43:29.753219  282174 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1020 12:43:29.757811  282174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:43:29.768575  282174 kubeadm.go:883] updating cluster {Name:embed-certs-907116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:43:29.768695  282174 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:43:29.768741  282174 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:43:29.799720  282174 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:43:29.799743  282174 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:43:29.799818  282174 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:43:29.826522  282174 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:43:29.826547  282174 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:43:29.826555  282174 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1020 12:43:29.826665  282174 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-907116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
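
The empty ExecStart= line in the kubelet drop-in above is the standard systemd override idiom: it clears the ExecStart inherited from kubelet.service before redefining it with the minikube-specific flags. The merged result can be inspected with:

    systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf override
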
	I1020 12:43:29.826745  282174 ssh_runner.go:195] Run: crio config
	I1020 12:43:29.872225  282174 cni.go:84] Creating CNI manager for ""
	I1020 12:43:29.872246  282174 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:43:29.872263  282174 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:43:29.872299  282174 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-907116 NodeName:embed-certs-907116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:43:29.872454  282174 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-907116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
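
The kubeadm.yaml rendered above bundles four objects separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); kubeadm reads all of them from a single file. It can be sanity-checked offline with something like the following (sketch; availability of `kubeadm config validate` in this kubeadm build is assumed):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
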
	I1020 12:43:29.872520  282174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:43:29.880747  282174 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:43:29.880820  282174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:43:29.888269  282174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1020 12:43:29.900673  282174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:43:29.913365  282174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1020 12:43:29.926174  282174 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:43:29.929759  282174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:43:29.940496  282174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:43:30.028187  282174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:43:30.053128  282174 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116 for IP: 192.168.76.2
	I1020 12:43:30.053153  282174 certs.go:195] generating shared ca certs ...
	I1020 12:43:30.053172  282174 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:43:30.053336  282174 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:43:30.053385  282174 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:43:30.053399  282174 certs.go:257] generating profile certs ...
	I1020 12:43:30.053506  282174 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/client.key
	I1020 12:43:30.053592  282174 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/apiserver.key.e2821edb
	I1020 12:43:30.053646  282174 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/proxy-client.key
	I1020 12:43:30.053816  282174 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:43:30.053860  282174 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:43:30.053873  282174 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:43:30.053916  282174 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:43:30.053946  282174 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:43:30.053981  282174 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:43:30.054035  282174 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:43:30.054879  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:43:30.074599  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:43:30.094897  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:43:30.115066  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:43:30.139675  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1020 12:43:30.158019  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1020 12:43:30.175542  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:43:30.194482  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:43:30.213197  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:43:30.231586  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:43:30.250353  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:43:30.269449  282174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:43:30.287878  282174 ssh_runner.go:195] Run: openssl version
	I1020 12:43:30.296627  282174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:43:30.309378  282174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:43:30.315683  282174 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:43:30.315850  282174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:43:30.374507  282174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:43:30.388079  282174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:43:30.402319  282174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:43:30.409407  282174 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:43:30.409470  282174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:43:30.469989  282174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:43:30.482448  282174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:43:30.496730  282174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:43:30.503084  282174 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:43:30.503148  282174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:43:30.563976  282174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
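
The *.0 names above are OpenSSL subject-hash links; each hash value comes from the `openssl x509 -hash -noout` calls shown in the log, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, hence the link /etc/ssl/certs/b5213941.0 -> minikubeCA.pem
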
	I1020 12:43:30.576087  282174 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:43:30.585162  282174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 12:43:30.649887  282174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 12:43:30.706329  282174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 12:43:30.773997  282174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 12:43:30.834884  282174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 12:43:30.888384  282174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
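
Each -checkend 86400 probe above exits non-zero if the certificate expires within the next 86400 seconds (24h); for example:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt \
      -checkend 86400 && echo "valid for >24h" || echo "renew needed"
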
	I1020 12:43:30.931270  282174 kubeadm.go:400] StartCluster: {Name:embed-certs-907116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:43:30.931392  282174 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:43:30.931474  282174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:43:30.974739  282174 cri.go:89] found id: "71b8d519c8fcfa7da65604dd578e2dc4d11fc1ca223185cae0a2ce646c90d777"
	I1020 12:43:30.974765  282174 cri.go:89] found id: "b16a54394efdcb6933a56df962f7f8423ae93b34d8452a6afc5f404b46da576e"
	I1020 12:43:30.974781  282174 cri.go:89] found id: "22cf3642d99bbb980929d5d8e78116ccc79fbe6f90ed96694a1910e81f25dac6"
	I1020 12:43:30.974786  282174 cri.go:89] found id: "c4cc4d9df25ab88c844bc98d6506700dac4d75294815034c92cfa41e1ddb2d01"
	I1020 12:43:30.974798  282174 cri.go:89] found id: ""
	I1020 12:43:30.974858  282174 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 12:43:30.992361  282174 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:43:30Z" level=error msg="open /run/runc: no such file or directory"
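
The runc error above is benign here: /run/runc is (to the best of our knowledge) runc's default state directory for root, so its absence simply means runc has no containers of its own yet, which is why the code logs a warning and continues. A tolerant form of the same probe (sketch):

    sudo runc list -f json 2>/dev/null || echo "[]"   # treat a missing state dir as an empty list
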
	I1020 12:43:30.992444  282174 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:43:31.003886  282174 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 12:43:31.003909  282174 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 12:43:31.003957  282174 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 12:43:31.014212  282174 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:43:31.015172  282174 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-907116" does not appear in /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:43:31.015673  282174 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-11075/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-907116" cluster setting kubeconfig missing "embed-certs-907116" context setting]
	I1020 12:43:31.016485  282174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:43:31.018425  282174 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 12:43:31.028743  282174 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1020 12:43:31.028811  282174 kubeadm.go:601] duration metric: took 24.895322ms to restartPrimaryControlPlane
	I1020 12:43:31.028823  282174 kubeadm.go:402] duration metric: took 97.565027ms to StartCluster
	I1020 12:43:31.028848  282174 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:43:31.028921  282174 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:43:31.030854  282174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:43:31.031070  282174 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:43:31.031218  282174 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:43:31.031315  282174 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-907116"
	I1020 12:43:31.031333  282174 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-907116"
	W1020 12:43:31.031341  282174 addons.go:247] addon storage-provisioner should already be in state true
	I1020 12:43:31.031347  282174 config.go:182] Loaded profile config "embed-certs-907116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:43:31.031371  282174 host.go:66] Checking if "embed-certs-907116" exists ...
	I1020 12:43:31.031380  282174 addons.go:69] Setting dashboard=true in profile "embed-certs-907116"
	I1020 12:43:31.031388  282174 addons.go:238] Setting addon dashboard=true in "embed-certs-907116"
	W1020 12:43:31.031394  282174 addons.go:247] addon dashboard should already be in state true
	I1020 12:43:31.031411  282174 host.go:66] Checking if "embed-certs-907116" exists ...
	I1020 12:43:31.031493  282174 addons.go:69] Setting default-storageclass=true in profile "embed-certs-907116"
	I1020 12:43:31.031520  282174 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-907116"
	I1020 12:43:31.031854  282174 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:43:31.031860  282174 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:43:31.032041  282174 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:43:31.033825  282174 out.go:179] * Verifying Kubernetes components...
	I1020 12:43:31.035307  282174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:43:31.063304  282174 addons.go:238] Setting addon default-storageclass=true in "embed-certs-907116"
	W1020 12:43:31.063327  282174 addons.go:247] addon default-storageclass should already be in state true
	I1020 12:43:31.063368  282174 host.go:66] Checking if "embed-certs-907116" exists ...
	I1020 12:43:31.063707  282174 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 12:43:31.063713  282174 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:43:31.064083  282174 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:43:31.069024  282174 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:43:31.069100  282174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:43:31.069184  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:31.073817  282174 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1020 12:43:28.101850  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:28.102297  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:28.102354  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:28.102412  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:28.130571  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:28.130595  236655 cri.go:89] found id: ""
	I1020 12:43:28.130603  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:28.130659  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:28.134635  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:28.134693  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:28.161034  236655 cri.go:89] found id: ""
	I1020 12:43:28.161061  236655 logs.go:282] 0 containers: []
	W1020 12:43:28.161068  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:28.161081  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:28.161128  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:43:28.188260  236655 cri.go:89] found id: ""
	I1020 12:43:28.188288  236655 logs.go:282] 0 containers: []
	W1020 12:43:28.188299  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:43:28.188306  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:43:28.188366  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:43:28.216717  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:28.216745  236655 cri.go:89] found id: ""
	I1020 12:43:28.216754  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:43:28.216826  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:28.220831  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:43:28.220901  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:43:28.248167  236655 cri.go:89] found id: ""
	I1020 12:43:28.248193  236655 logs.go:282] 0 containers: []
	W1020 12:43:28.248202  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:43:28.248212  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:43:28.248268  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:43:28.277447  236655 cri.go:89] found id: "08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:28.277470  236655 cri.go:89] found id: ""
	I1020 12:43:28.277479  236655 logs.go:282] 1 containers: [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a]
	I1020 12:43:28.277538  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:28.281830  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:43:28.281894  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:43:28.311162  236655 cri.go:89] found id: ""
	I1020 12:43:28.311192  236655 logs.go:282] 0 containers: []
	W1020 12:43:28.311202  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:43:28.311210  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:43:28.311266  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:43:28.344270  236655 cri.go:89] found id: ""
	I1020 12:43:28.344297  236655 logs.go:282] 0 containers: []
	W1020 12:43:28.344307  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:43:28.344318  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:43:28.344334  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:28.400565  236655 logs.go:123] Gathering logs for kube-controller-manager [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a] ...
	I1020 12:43:28.400592  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:28.426481  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:43:28.426506  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:43:28.490973  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:43:28.491006  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:43:28.523917  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:43:28.523951  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:43:28.615507  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:43:28.615537  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:43:28.634269  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:43:28.634308  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:43:28.695892  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:43:28.695921  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:43:28.695936  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:31.075254  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 12:43:31.075282  282174 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 12:43:31.075354  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:31.095613  282174 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:43:31.095826  282174 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:43:31.096111  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:31.109667  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:31.116148  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:31.132473  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:31.218496  282174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:43:31.241456  282174 node_ready.go:35] waiting up to 6m0s for node "embed-certs-907116" to be "Ready" ...
	I1020 12:43:31.242727  282174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:43:31.250154  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 12:43:31.250193  282174 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 12:43:31.264712  282174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:43:31.278252  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 12:43:31.278384  282174 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 12:43:31.302465  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 12:43:31.302489  282174 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 12:43:31.330913  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 12:43:31.330937  282174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 12:43:31.362424  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 12:43:31.362453  282174 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 12:43:31.384748  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 12:43:31.384815  282174 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 12:43:31.404481  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 12:43:31.404515  282174 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 12:43:31.423735  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 12:43:31.423793  282174 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 12:43:31.441461  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 12:43:31.441489  282174 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 12:43:31.459848  282174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 12:43:32.877172  282174 node_ready.go:49] node "embed-certs-907116" is "Ready"
	I1020 12:43:32.877218  282174 node_ready.go:38] duration metric: took 1.635725927s for node "embed-certs-907116" to be "Ready" ...
	I1020 12:43:32.877239  282174 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:43:32.877296  282174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:43:33.427798  282174 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.185023352s)
	I1020 12:43:33.427846  282174 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.163101115s)
	I1020 12:43:33.427966  282174 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.968072821s)
	I1020 12:43:33.427985  282174 api_server.go:72] duration metric: took 2.396892657s to wait for apiserver process to appear ...
	I1020 12:43:33.427999  282174 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:43:33.428016  282174 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 12:43:33.430371  282174 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-907116 addons enable metrics-server
	
	I1020 12:43:33.435517  282174 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:43:33.435547  282174 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:43:33.442811  282174 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1020 12:43:33.444379  282174 addons.go:514] duration metric: took 2.413171026s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
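	The 500s above are expected during startup: the verbose breakdown shows only two post-start hooks still pending (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes), and minikube simply re-polls until they clear. A minimal sketch of the same probe run by hand, assuming minikube's default client-certificate layout for this profile (paths are assumptions, not taken from this run):
	
	  # Hypothetical manual re-run of the healthz poll logged above; the
	  # certificate paths are assumptions based on minikube's standard layout.
	  curl --cacert ~/.minikube/ca.crt \
	       --cert   ~/.minikube/profiles/embed-certs-907116/client.crt \
	       --key    ~/.minikube/profiles/embed-certs-907116/client.key \
	       "https://192.168.76.2:8443/healthz?verbose"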
	I1020 12:43:31.229937  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:31.230353  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:31.230398  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:31.230445  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:31.275632  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:31.275654  236655 cri.go:89] found id: ""
	I1020 12:43:31.275664  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:31.275718  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:31.281509  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:31.281600  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:31.329695  236655 cri.go:89] found id: ""
	I1020 12:43:31.329844  236655 logs.go:282] 0 containers: []
	W1020 12:43:31.329863  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:31.329871  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:31.329937  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:43:31.367481  236655 cri.go:89] found id: ""
	I1020 12:43:31.367510  236655 logs.go:282] 0 containers: []
	W1020 12:43:31.367521  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:43:31.367529  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:43:31.367586  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:43:31.407242  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:31.407281  236655 cri.go:89] found id: ""
	I1020 12:43:31.407290  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:43:31.407376  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:31.412185  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:43:31.412257  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:43:31.450867  236655 cri.go:89] found id: ""
	I1020 12:43:31.450901  236655 logs.go:282] 0 containers: []
	W1020 12:43:31.450912  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:43:31.450919  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:43:31.450978  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:43:31.488614  236655 cri.go:89] found id: "08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:31.488646  236655 cri.go:89] found id: ""
	I1020 12:43:31.488655  236655 logs.go:282] 1 containers: [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a]
	I1020 12:43:31.488719  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:31.495664  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:43:31.495927  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:43:31.535426  236655 cri.go:89] found id: ""
	I1020 12:43:31.535550  236655 logs.go:282] 0 containers: []
	W1020 12:43:31.535564  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:43:31.535571  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:43:31.535633  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:43:31.573067  236655 cri.go:89] found id: ""
	I1020 12:43:31.573095  236655 logs.go:282] 0 containers: []
	W1020 12:43:31.573105  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:43:31.573116  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:43:31.573134  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:43:31.590595  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:43:31.590623  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:43:31.671142  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:43:31.671168  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:43:31.671184  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:31.721051  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:43:31.721086  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:31.812207  236655 logs.go:123] Gathering logs for kube-controller-manager [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a] ...
	I1020 12:43:31.812319  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:31.843512  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:43:31.843552  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:43:31.919396  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:43:31.919440  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:43:31.968423  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:43:31.968455  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
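	Each retry cycle above performs the same discovery pass: for every control-plane component, ask CRI-O (via crictl) which containers match, then tail the logs of whatever turned up. A minimal shell sketch of that loop, with the component names and the --tail 400 depth taken from the log lines and everything else assumed:
	
	  # Sketch of the per-component log-gathering pass recorded above.
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet storage-provisioner; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    if [ -z "$ids" ]; then
	      echo "No container was found matching \"$name\""
	      continue
	    fi
	    for id in $ids; do
	      sudo crictl logs --tail 400 "$id"   # same tail depth as the log above
	    done
	  done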
	I1020 12:43:34.594038  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:34.594467  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:34.594523  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:34.594577  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:34.622255  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:34.622276  236655 cri.go:89] found id: ""
	I1020 12:43:34.622283  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:34.622332  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:34.626360  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:34.626434  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:34.652754  236655 cri.go:89] found id: ""
	I1020 12:43:34.652802  236655 logs.go:282] 0 containers: []
	W1020 12:43:34.652814  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:34.652822  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:34.652887  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:43:34.680174  236655 cri.go:89] found id: ""
	I1020 12:43:34.680196  236655 logs.go:282] 0 containers: []
	W1020 12:43:34.680204  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:43:34.680209  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:43:34.680264  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:43:34.706480  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:34.706506  236655 cri.go:89] found id: ""
	I1020 12:43:34.706515  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:43:34.706579  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:34.710698  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:43:34.710768  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:43:34.737648  236655 cri.go:89] found id: ""
	I1020 12:43:34.737678  236655 logs.go:282] 0 containers: []
	W1020 12:43:34.737689  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:43:34.737697  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:43:34.737756  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:43:34.764563  236655 cri.go:89] found id: "08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:34.764590  236655 cri.go:89] found id: ""
	I1020 12:43:34.764602  236655 logs.go:282] 1 containers: [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a]
	I1020 12:43:34.764666  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:34.768542  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:43:34.768602  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:43:34.793986  236655 cri.go:89] found id: ""
	I1020 12:43:34.794008  236655 logs.go:282] 0 containers: []
	W1020 12:43:34.794015  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:43:34.794021  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:43:34.794088  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:43:34.821499  236655 cri.go:89] found id: ""
	I1020 12:43:34.821525  236655 logs.go:282] 0 containers: []
	W1020 12:43:34.821532  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:43:34.821541  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:43:34.821553  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:43:34.835962  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:43:34.835990  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:43:34.891744  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:43:34.891766  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:43:34.891798  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:34.928604  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:43:34.928642  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:34.994662  236655 logs.go:123] Gathering logs for kube-controller-manager [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a] ...
	I1020 12:43:34.994705  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:35.025651  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:43:35.025683  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:43:35.083732  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:43:35.083828  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:43:35.116172  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:43:35.116200  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:43:33.928476  282174 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 12:43:33.933279  282174 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:43:33.933325  282174 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:43:34.428927  282174 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 12:43:34.433179  282174 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1020 12:43:34.434171  282174 api_server.go:141] control plane version: v1.34.1
	I1020 12:43:34.434194  282174 api_server.go:131] duration metric: took 1.006189688s to wait for apiserver health ...
	I1020 12:43:34.434202  282174 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:43:34.437536  282174 system_pods.go:59] 8 kube-system pods found
	I1020 12:43:34.437566  282174 system_pods.go:61] "coredns-66bc5c9577-vpzk5" [7422dd44-eb83-44f9-8711-41a74794dfed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:43:34.437574  282174 system_pods.go:61] "etcd-embed-certs-907116" [c9b10ac7-f33b-4904-9bd4-e4b45fdbecc3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:43:34.437580  282174 system_pods.go:61] "kindnet-24g82" [86b2fc3f-2d40-4a2d-9068-75b0a952b958] Running
	I1020 12:43:34.437587  282174 system_pods.go:61] "kube-apiserver-embed-certs-907116" [bf5edeb3-2d81-45bc-87e2-6ab9a0e5640f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:43:34.437595  282174 system_pods.go:61] "kube-controller-manager-embed-certs-907116" [7897ad50-0673-4bd6-9cea-65cb1a82d2c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:43:34.437605  282174 system_pods.go:61] "kube-proxy-s2xbv" [f01f5d2c-f20c-42ea-a933-b6d15ea40244] Running
	I1020 12:43:34.437613  282174 system_pods.go:61] "kube-scheduler-embed-certs-907116" [f0bde9ca-242e-4259-91fc-73b86f6b9066] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:43:34.437619  282174 system_pods.go:61] "storage-provisioner" [83ece3ef-33e2-4353-9230-6bdd8c7320c0] Running
	I1020 12:43:34.437631  282174 system_pods.go:74] duration metric: took 3.422035ms to wait for pod list to return data ...
	I1020 12:43:34.437641  282174 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:43:34.440279  282174 default_sa.go:45] found service account: "default"
	I1020 12:43:34.440305  282174 default_sa.go:55] duration metric: took 2.656969ms for default service account to be created ...
	I1020 12:43:34.440316  282174 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:43:34.443019  282174 system_pods.go:86] 8 kube-system pods found
	I1020 12:43:34.443051  282174 system_pods.go:89] "coredns-66bc5c9577-vpzk5" [7422dd44-eb83-44f9-8711-41a74794dfed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:43:34.443066  282174 system_pods.go:89] "etcd-embed-certs-907116" [c9b10ac7-f33b-4904-9bd4-e4b45fdbecc3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:43:34.443074  282174 system_pods.go:89] "kindnet-24g82" [86b2fc3f-2d40-4a2d-9068-75b0a952b958] Running
	I1020 12:43:34.443084  282174 system_pods.go:89] "kube-apiserver-embed-certs-907116" [bf5edeb3-2d81-45bc-87e2-6ab9a0e5640f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:43:34.443095  282174 system_pods.go:89] "kube-controller-manager-embed-certs-907116" [7897ad50-0673-4bd6-9cea-65cb1a82d2c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:43:34.443106  282174 system_pods.go:89] "kube-proxy-s2xbv" [f01f5d2c-f20c-42ea-a933-b6d15ea40244] Running
	I1020 12:43:34.443126  282174 system_pods.go:89] "kube-scheduler-embed-certs-907116" [f0bde9ca-242e-4259-91fc-73b86f6b9066] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:43:34.443136  282174 system_pods.go:89] "storage-provisioner" [83ece3ef-33e2-4353-9230-6bdd8c7320c0] Running
	I1020 12:43:34.443145  282174 system_pods.go:126] duration metric: took 2.82209ms to wait for k8s-apps to be running ...
	I1020 12:43:34.443155  282174 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:43:34.443208  282174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:43:34.456643  282174 system_svc.go:56] duration metric: took 13.479504ms WaitForService to wait for kubelet
	I1020 12:43:34.456671  282174 kubeadm.go:586] duration metric: took 3.425579918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:43:34.456692  282174 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:43:34.459787  282174 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:43:34.459828  282174 node_conditions.go:123] node cpu capacity is 8
	I1020 12:43:34.459844  282174 node_conditions.go:105] duration metric: took 3.146734ms to run NodePressure ...
	I1020 12:43:34.459856  282174 start.go:241] waiting for startup goroutines ...
	I1020 12:43:34.459864  282174 start.go:246] waiting for cluster config update ...
	I1020 12:43:34.459874  282174 start.go:255] writing updated cluster config ...
	I1020 12:43:34.460125  282174 ssh_runner.go:195] Run: rm -f paused
	I1020 12:43:34.464153  282174 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:43:34.467524  282174 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vpzk5" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 12:43:36.473348  282174 pod_ready.go:104] pod "coredns-66bc5c9577-vpzk5" is not "Ready", error: <nil>
	W1020 12:43:38.474481  282174 pod_ready.go:104] pod "coredns-66bc5c9577-vpzk5" is not "Ready", error: <nil>
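	The pod_ready phase above gives each control-plane pod up to 4m0s to report Ready. A rough kubectl equivalent for a single pod (a sketch, not minikube's own implementation; the context name is assumed to match the profile):
	
	  kubectl --context embed-certs-907116 -n kube-system \
	    wait pod coredns-66bc5c9577-vpzk5 --for=condition=Ready --timeout=4m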
	I1020 12:43:37.717844  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:37.718275  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:37.718329  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:37.718393  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:37.754323  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:37.754370  236655 cri.go:89] found id: ""
	I1020 12:43:37.754381  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:37.754449  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:37.759972  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:37.760041  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:37.795405  236655 cri.go:89] found id: ""
	I1020 12:43:37.795434  236655 logs.go:282] 0 containers: []
	W1020 12:43:37.795443  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:37.795450  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:37.795508  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:43:37.829975  236655 cri.go:89] found id: ""
	I1020 12:43:37.830011  236655 logs.go:282] 0 containers: []
	W1020 12:43:37.830022  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:43:37.830030  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:43:37.830093  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:43:37.871099  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:37.871127  236655 cri.go:89] found id: ""
	I1020 12:43:37.871137  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:43:37.871196  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:37.876285  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:43:37.876356  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:43:37.908690  236655 cri.go:89] found id: ""
	I1020 12:43:37.908718  236655 logs.go:282] 0 containers: []
	W1020 12:43:37.908729  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:43:37.908737  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:43:37.908828  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:43:37.945866  236655 cri.go:89] found id: "08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:37.945896  236655 cri.go:89] found id: ""
	I1020 12:43:37.945906  236655 logs.go:282] 1 containers: [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a]
	I1020 12:43:37.945965  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:37.951747  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:43:37.951885  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:43:37.991796  236655 cri.go:89] found id: ""
	I1020 12:43:37.991826  236655 logs.go:282] 0 containers: []
	W1020 12:43:37.991836  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:43:37.991843  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:43:37.991904  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:43:38.029214  236655 cri.go:89] found id: ""
	I1020 12:43:38.029241  236655 logs.go:282] 0 containers: []
	W1020 12:43:38.029253  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:43:38.029264  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:43:38.029282  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:43:38.167135  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:43:38.167165  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:43:38.185949  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:43:38.185979  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:43:38.260207  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:43:38.260233  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:43:38.260248  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:38.303320  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:43:38.303350  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:38.388952  236655 logs.go:123] Gathering logs for kube-controller-manager [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a] ...
	I1020 12:43:38.389034  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:38.426168  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:43:38.426196  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:43:38.511979  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:43:38.512017  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:43:41.054853  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:41.055292  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:41.055356  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:41.055408  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:41.093388  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:41.093472  236655 cri.go:89] found id: ""
	I1020 12:43:41.093483  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:41.093555  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:41.100624  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:41.100742  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:41.140030  236655 cri.go:89] found id: ""
	I1020 12:43:41.140056  236655 logs.go:282] 0 containers: []
	W1020 12:43:41.140067  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:41.140079  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:41.140138  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	
	
	==> CRI-O <==
	Oct 20 12:43:15 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:15.501870119Z" level=info msg="Started container" PID=1752 containerID=d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769/dashboard-metrics-scraper id=870a4723-c4b7-49a8-b368-c05253c4a1e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71953c6975d23ba4f63b79aa41391cb750fc2ad4ca0ed33bb8b463268684827e
	Oct 20 12:43:15 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:15.578315325Z" level=info msg="Removing container: caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502" id=86d59558-a713-46c2-8caa-60631cb9cd2f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:43:15 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:15.592136835Z" level=info msg="Removed container caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769/dashboard-metrics-scraper" id=86d59558-a713-46c2-8caa-60631cb9cd2f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.602263574Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6c576dcf-366b-49e7-9224-d6b6a81b4475 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.603191988Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=01b51627-b656-4e5b-8379-834eeceb8309 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.60465722Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e3899b85-c46d-4ece-9b5d-5a9242bd3cb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.604811053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.60937039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.609514365Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/405c71fa53bba4d556ef1ad89650177a59713fc22ddff4754f7beba956854715/merged/etc/passwd: no such file or directory"
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.609539177Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/405c71fa53bba4d556ef1ad89650177a59713fc22ddff4754f7beba956854715/merged/etc/group: no such file or directory"
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.60986244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.633509834Z" level=info msg="Created container fe371429b88344749ac37a938a9d1bdc124657f68155bff696e9d16c03ceb4e6: kube-system/storage-provisioner/storage-provisioner" id=e3899b85-c46d-4ece-9b5d-5a9242bd3cb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.63419485Z" level=info msg="Starting container: fe371429b88344749ac37a938a9d1bdc124657f68155bff696e9d16c03ceb4e6" id=55132896-48c3-4cf0-9c38-2aa75662ea1a name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.636642534Z" level=info msg="Started container" PID=1766 containerID=fe371429b88344749ac37a938a9d1bdc124657f68155bff696e9d16c03ceb4e6 description=kube-system/storage-provisioner/storage-provisioner id=55132896-48c3-4cf0-9c38-2aa75662ea1a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8818ed6ff5c0bb8393917e743e929ca94648618e7e6d01d3d0e351f3731115e9
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.456065688Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0db072bd-73cb-4b0d-a0fe-b7f5706f4d0b name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.45703181Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=57b848bc-9edf-44d6-9c3b-ee71a1fbdd44 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.458069735Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769/dashboard-metrics-scraper" id=e089ea13-eba3-4e82-9987-bd092806a6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.458204274Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.463655129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.464116041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.501352554Z" level=info msg="Created container 52f945f0582ca2066d66902a718c474fe74c72c28edc6b6760d429767fb96487: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769/dashboard-metrics-scraper" id=e089ea13-eba3-4e82-9987-bd092806a6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.502067139Z" level=info msg="Starting container: 52f945f0582ca2066d66902a718c474fe74c72c28edc6b6760d429767fb96487" id=2316e1e0-c22a-461c-8685-50416423e397 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.504045441Z" level=info msg="Started container" PID=1802 containerID=52f945f0582ca2066d66902a718c474fe74c72c28edc6b6760d429767fb96487 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769/dashboard-metrics-scraper id=2316e1e0-c22a-461c-8685-50416423e397 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71953c6975d23ba4f63b79aa41391cb750fc2ad4ca0ed33bb8b463268684827e
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.642318223Z" level=info msg="Removing container: d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b" id=55697984-3d28-4ee6-95fe-8a975e14a035 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.652043282Z" level=info msg="Removed container d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769/dashboard-metrics-scraper" id=55697984-3d28-4ee6-95fe-8a975e14a035 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	52f945f0582ca       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago       Exited              dashboard-metrics-scraper   3                   71953c6975d23       dashboard-metrics-scraper-6ffb444bf9-sc769             kubernetes-dashboard
	fe371429b8834       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   8818ed6ff5c0b       storage-provisioner                                    kube-system
	997f5fb70cf17       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   4417a834810bd       kubernetes-dashboard-855c9754f9-p7w4b                  kubernetes-dashboard
	e03f2f95e6c14       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   2f74c867b0162       kindnet-jrv62                                          kube-system
	96ed2fb71faec       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   02fd34db040be       kube-proxy-bbw6k                                       kube-system
	07c72d8489055       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   332c5a97a043a       busybox                                                default
	7866a55261bf6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   e56c4790fd1b5       coredns-66bc5c9577-vd5sd                               kube-system
	949fa188399d8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   8818ed6ff5c0b       storage-provisioner                                    kube-system
	950cf2bcf663d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   82195466f2f6d       kube-apiserver-default-k8s-diff-port-874012            kube-system
	361bbce2ef1da       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   40ee20c300d3a       kube-scheduler-default-k8s-diff-port-874012            kube-system
	4701f0f003c88       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   9eb180e70ba13       kube-controller-manager-default-k8s-diff-port-874012   kube-system
	7c78acc071dce       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   44bb79ac3b98d       etcd-default-k8s-diff-port-874012                      kube-system
	
	
	==> coredns [7866a55261bf64a5c5e00ff9934f5375450ec837c58b9e9ea122dbc5064839b2] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48076 - 16981 "HINFO IN 3541001985407855094.5017519029671984671. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01334755s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
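	The i/o timeouts above show coredns failing to reach the in-cluster apiserver service VIP. A quick reachability probe from the node (a sketch; the VIP 10.96.0.1:443 is taken from the errors above, and bash's /dev/tcp redirection stands in for a real client):
	
	  minikube -p default-k8s-diff-port-874012 ssh -- \
	    "timeout 3 bash -c '</dev/tcp/10.96.0.1/443' && echo reachable || echo unreachable"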
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-874012
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-874012
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=default-k8s-diff-port-874012
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_41_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:41:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-874012
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:43:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:43:22 +0000   Mon, 20 Oct 2025 12:41:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:43:22 +0000   Mon, 20 Oct 2025 12:41:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:43:22 +0000   Mon, 20 Oct 2025 12:41:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:43:22 +0000   Mon, 20 Oct 2025 12:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-874012
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2780d33f-1af5-4f46-b321-ab4699252d20
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-vd5sd                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-default-k8s-diff-port-874012                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-jrv62                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-default-k8s-diff-port-874012             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-874012    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-bbw6k                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-default-k8s-diff-port-874012             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-sc769              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-p7w4b                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           105s               node-controller  Node default-k8s-diff-port-874012 event: Registered Node default-k8s-diff-port-874012 in Controller
	  Normal  NodeReady                93s                kubelet          Node default-k8s-diff-port-874012 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node default-k8s-diff-port-874012 event: Registered Node default-k8s-diff-port-874012 in Controller
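	The node summary above can be regenerated at any time while the cluster is up (assuming the kubeconfig context carries the profile name, as minikube sets up by default):
	
	  kubectl --context default-k8s-diff-port-874012 \
	    describe node default-k8s-diff-port-874012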
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [7c78acc071dce4799d081c9cd84fb7f3990161652fd814c617b6d088840d020a] <==
	{"level":"warn","ts":"2025-10-20T12:42:51.753981Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.802587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-10-20T12:42:51.754521Z","caller":"traceutil/trace.go:172","msg":"trace[1407051675] transaction","detail":"{read_only:false; number_of_response:0; response_revision:446; }","duration":"269.639641ms","start":"2025-10-20T12:42:51.484874Z","end":"2025-10-20T12:42:51.754514Z","steps":["trace[1407051675] 'process raft request'  (duration: 269.304054ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:42:51.754543Z","caller":"traceutil/trace.go:172","msg":"trace[1601164554] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:445; }","duration":"271.379532ms","start":"2025-10-20T12:42:51.483155Z","end":"2025-10-20T12:42:51.754535Z","steps":["trace[1601164554] 'agreement among raft nodes before linearized reading'  (duration: 270.743126ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T12:42:51.754338Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-20T12:42:51.451425Z","time spent":"302.670429ms","remote":"127.0.0.1:42346","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5087,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-874012\" mod_revision:391 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-874012\" value_size:5009 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-874012\" > >"}
	{"level":"warn","ts":"2025-10-20T12:42:51.754365Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"271.275422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2025-10-20T12:42:51.754677Z","caller":"traceutil/trace.go:172","msg":"trace[1586996241] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:446; }","duration":"271.581923ms","start":"2025-10-20T12:42:51.483077Z","end":"2025-10-20T12:42:51.754659Z","steps":["trace[1586996241] 'agreement among raft nodes before linearized reading'  (duration: 270.946496ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T12:42:51.931146Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.09488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:4341"}
	{"level":"info","ts":"2025-10-20T12:42:51.931222Z","caller":"traceutil/trace.go:172","msg":"trace[2048949999] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:446; }","duration":"153.184365ms","start":"2025-10-20T12:42:51.778017Z","end":"2025-10-20T12:42:51.931201Z","steps":["trace[2048949999] 'agreement among raft nodes before linearized reading'  (duration: 87.742915ms)","trace[2048949999] 'range keys from in-memory index tree'  (duration: 65.229624ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:51.931167Z","caller":"traceutil/trace.go:172","msg":"trace[1244085729] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"168.599329ms","start":"2025-10-20T12:42:51.762553Z","end":"2025-10-20T12:42:51.931153Z","steps":["trace[1244085729] 'process raft request'  (duration: 103.312908ms)","trace[1244085729] 'compare'  (duration: 65.164791ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:51.932001Z","caller":"traceutil/trace.go:172","msg":"trace[112224890] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"163.05423ms","start":"2025-10-20T12:42:51.768931Z","end":"2025-10-20T12:42:51.931985Z","steps":["trace[112224890] 'process raft request'  (duration: 162.894389ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:42:52.065586Z","caller":"traceutil/trace.go:172","msg":"trace[1994445504] linearizableReadLoop","detail":"{readStateIndex:476; appliedIndex:476; }","duration":"121.137769ms","start":"2025-10-20T12:42:51.944421Z","end":"2025-10-20T12:42:52.065559Z","steps":["trace[1994445504] 'read index received'  (duration: 121.126343ms)","trace[1994445504] 'applied index is now lower than readState.Index'  (duration: 9.136µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:52.105524Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.063935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T12:42:52.105609Z","caller":"traceutil/trace.go:172","msg":"trace[614722426] range","detail":"{range_begin:/registry/clusterrolebindings; range_end:; response_count:0; response_revision:448; }","duration":"161.169839ms","start":"2025-10-20T12:42:51.944412Z","end":"2025-10-20T12:42:52.105582Z","steps":["trace[614722426] 'agreement among raft nodes before linearized reading'  (duration: 121.222771ms)","trace[614722426] 'range keys from in-memory index tree'  (duration: 39.801152ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:52.105626Z","caller":"traceutil/trace.go:172","msg":"trace[1725100767] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"163.362451ms","start":"2025-10-20T12:42:51.942247Z","end":"2025-10-20T12:42:52.105610Z","steps":["trace[1725100767] 'process raft request'  (duration: 123.397811ms)","trace[1725100767] 'compare'  (duration: 39.833854ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:52.106367Z","caller":"traceutil/trace.go:172","msg":"trace[47872528] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"163.566151ms","start":"2025-10-20T12:42:51.942789Z","end":"2025-10-20T12:42:52.106355Z","steps":["trace[47872528] 'process raft request'  (duration: 163.445347ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:42:52.429127Z","caller":"traceutil/trace.go:172","msg":"trace[799609906] linearizableReadLoop","detail":"{readStateIndex:484; appliedIndex:484; }","duration":"124.442116ms","start":"2025-10-20T12:42:52.304660Z","end":"2025-10-20T12:42:52.429102Z","steps":["trace[799609906] 'read index received'  (duration: 124.43235ms)","trace[799609906] 'applied index is now lower than readState.Index'  (duration: 7.523µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:52.603150Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"298.466822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:1 size:2030"}
	{"level":"warn","ts":"2025-10-20T12:42:52.603190Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.932061ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789458942085856 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" value_size:867 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-20T12:42:52.603216Z","caller":"traceutil/trace.go:172","msg":"trace[1954402182] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:456; }","duration":"298.544054ms","start":"2025-10-20T12:42:52.304655Z","end":"2025-10-20T12:42:52.603199Z","steps":["trace[1954402182] 'agreement among raft nodes before linearized reading'  (duration: 124.527287ms)","trace[1954402182] 'range keys from in-memory index tree'  (duration: 173.834751ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:52.603344Z","caller":"traceutil/trace.go:172","msg":"trace[1103778544] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"298.780613ms","start":"2025-10-20T12:42:52.304543Z","end":"2025-10-20T12:42:52.603323Z","steps":["trace[1103778544] 'process raft request'  (duration: 124.661622ms)","trace[1103778544] 'compare'  (duration: 173.779097ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:52.603442Z","caller":"traceutil/trace.go:172","msg":"trace[395698756] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"289.739802ms","start":"2025-10-20T12:42:52.313687Z","end":"2025-10-20T12:42:52.603426Z","steps":["trace[395698756] 'process raft request'  (duration: 289.582863ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:42:52.893670Z","caller":"traceutil/trace.go:172","msg":"trace[493070206] linearizableReadLoop","detail":"{readStateIndex:490; appliedIndex:490; }","duration":"139.398571ms","start":"2025-10-20T12:42:52.754247Z","end":"2025-10-20T12:42:52.893645Z","steps":["trace[493070206] 'read index received'  (duration: 139.387849ms)","trace[493070206] 'applied index is now lower than readState.Index'  (duration: 9.428µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:52.987261Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"232.984121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient\" limit:1 ","response":"range_response_count:1 size:706"}
	{"level":"info","ts":"2025-10-20T12:42:52.987347Z","caller":"traceutil/trace.go:172","msg":"trace[1103536632] range","detail":"{range_begin:/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient; range_end:; response_count:1; response_revision:462; }","duration":"233.087919ms","start":"2025-10-20T12:42:52.754243Z","end":"2025-10-20T12:42:52.987331Z","steps":["trace[1103536632] 'agreement among raft nodes before linearized reading'  (duration: 139.48403ms)","trace[1103536632] 'range keys from in-memory index tree'  (duration: 93.370956ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:52.987636Z","caller":"traceutil/trace.go:172","msg":"trace[962722829] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"233.721057ms","start":"2025-10-20T12:42:52.753900Z","end":"2025-10-20T12:42:52.987621Z","steps":["trace[962722829] 'process raft request'  (duration: 139.751101ms)","trace[962722829] 'compare'  (duration: 93.858933ms)"],"step_count":2}
	
	
	==> kernel <==
	 12:43:42 up  1:26,  0 user,  load average: 3.97, 3.47, 2.28
	Linux default-k8s-diff-port-874012 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e03f2f95e6c14702b90f8c7799cdb5513504049e5e68dc0d01aace1a70f8e115] <==
	I1020 12:42:53.087584       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:42:53.088207       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1020 12:42:53.088394       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:42:53.088414       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:42:53.088439       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:42:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:42:53.291533       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:42:53.291560       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:42:53.291591       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:42:53.291918       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:42:53.783568       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:42:53.783607       1 metrics.go:72] Registering metrics
	I1020 12:42:53.783673       1 controller.go:711] "Syncing nftables rules"
	I1020 12:43:03.290966       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:43:03.291041       1 main.go:301] handling current node
	I1020 12:43:13.290928       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:43:13.290974       1 main.go:301] handling current node
	I1020 12:43:23.290957       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:43:23.290985       1 main.go:301] handling current node
	I1020 12:43:33.296871       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:43:33.296903       1 main.go:301] handling current node
	
	
	==> kube-apiserver [950cf2bcf663da8ddc81ce889407cc48e3d12e5e1bd9be508b2b13a09017120c] <==
	I1020 12:42:51.036738       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:42:51.036753       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1020 12:42:51.037080       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 12:42:51.037991       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 12:42:51.039304       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1020 12:42:51.041189       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 12:42:51.041464       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 12:42:51.041272       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1020 12:42:51.041593       1 policy_source.go:240] refreshing policies
	I1020 12:42:51.045580       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 12:42:51.048097       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1020 12:42:51.061334       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:42:51.074238       1 cache.go:39] Caches are synced for autoregister controller
	E1020 12:42:51.221704       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 12:42:51.378356       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:42:51.754954       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:42:51.941633       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:42:51.943374       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:42:52.197477       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:42:52.249095       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:42:53.010446       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.0.9"}
	I1020 12:42:53.023527       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.254.76"}
	I1020 12:42:55.359000       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:42:55.760797       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 12:42:55.909890       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4701f0f003c887f114d5da2a88fc8b6767f57ea38df31b2ec658e6f9e2ca07df] <==
	I1020 12:42:55.311910       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1020 12:42:55.311918       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1020 12:42:55.312981       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:42:55.319344       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 12:42:55.321622       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 12:42:55.325938       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 12:42:55.355514       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 12:42:55.355542       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 12:42:55.355555       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 12:42:55.355514       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 12:42:55.355743       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1020 12:42:55.355748       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:42:55.355895       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 12:42:55.355937       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 12:42:55.355961       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 12:42:55.357332       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1020 12:42:55.360047       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 12:42:55.364873       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:42:55.367129       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:42:55.367149       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:42:55.367160       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:42:55.369303       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1020 12:42:55.371490       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1020 12:42:55.373759       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:42:55.384109       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [96ed2fb71faeca4bae41804a971903dfe647f4945e3ac5a8e2c2c362359f0919] <==
	I1020 12:42:52.949037       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:42:53.006059       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:42:53.106159       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:42:53.106197       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1020 12:42:53.106295       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:42:53.137425       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:42:53.137496       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:42:53.145496       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:42:53.146062       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:42:53.146115       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:42:53.147690       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:42:53.147761       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:42:53.148086       1 config.go:200] "Starting service config controller"
	I1020 12:42:53.148202       1 config.go:309] "Starting node config controller"
	I1020 12:42:53.148352       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:42:53.148233       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:42:53.147764       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:42:53.148505       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:42:53.248555       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:42:53.248578       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:42:53.248608       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:42:53.249709       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [361bbce2ef1dab79033c19296471736ded91254dc81373034fb69f4e8ab8a98c] <==
	I1020 12:42:49.940103       1 serving.go:386] Generated self-signed cert in-memory
	I1020 12:42:51.003016       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 12:42:51.003041       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:42:51.007642       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1020 12:42:51.007653       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:42:51.007651       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:42:51.007689       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:42:51.007700       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:42:51.007691       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1020 12:42:51.007929       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 12:42:51.007949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 12:42:51.107955       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1020 12:42:51.107962       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:42:51.108193       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:42:56 default-k8s-diff-port-874012 kubelet[719]: I1020 12:42:56.109760     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqb7l\" (UniqueName: \"kubernetes.io/projected/5bed4e77-d51d-4392-adf0-69a3e5538205-kube-api-access-pqb7l\") pod \"kubernetes-dashboard-855c9754f9-p7w4b\" (UID: \"5bed4e77-d51d-4392-adf0-69a3e5538205\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p7w4b"
	Oct 20 12:42:59 default-k8s-diff-port-874012 kubelet[719]: I1020 12:42:59.525992     719 scope.go:117] "RemoveContainer" containerID="048972c342cb6435492b54fcd19cd646a2fa14d3f0f885fa877001293b3efa62"
	Oct 20 12:43:00 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:00.531317     719 scope.go:117] "RemoveContainer" containerID="048972c342cb6435492b54fcd19cd646a2fa14d3f0f885fa877001293b3efa62"
	Oct 20 12:43:00 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:00.531660     719 scope.go:117] "RemoveContainer" containerID="caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502"
	Oct 20 12:43:00 default-k8s-diff-port-874012 kubelet[719]: E1020 12:43:00.531874     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sc769_kubernetes-dashboard(b42abc13-1c7a-4eae-94c0-853accd3f9a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769" podUID="b42abc13-1c7a-4eae-94c0-853accd3f9a3"
	Oct 20 12:43:01 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:01.533994     719 scope.go:117] "RemoveContainer" containerID="caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502"
	Oct 20 12:43:01 default-k8s-diff-port-874012 kubelet[719]: E1020 12:43:01.534207     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sc769_kubernetes-dashboard(b42abc13-1c7a-4eae-94c0-853accd3f9a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769" podUID="b42abc13-1c7a-4eae-94c0-853accd3f9a3"
	Oct 20 12:43:02 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:02.538652     719 scope.go:117] "RemoveContainer" containerID="caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502"
	Oct 20 12:43:02 default-k8s-diff-port-874012 kubelet[719]: E1020 12:43:02.538924     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sc769_kubernetes-dashboard(b42abc13-1c7a-4eae-94c0-853accd3f9a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769" podUID="b42abc13-1c7a-4eae-94c0-853accd3f9a3"
	Oct 20 12:43:02 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:02.550710     719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p7w4b" podStartSLOduration=2.152596274 podStartE2EDuration="7.550688417s" podCreationTimestamp="2025-10-20 12:42:55 +0000 UTC" firstStartedPulling="2025-10-20 12:42:56.321988802 +0000 UTC m=+7.976333795" lastFinishedPulling="2025-10-20 12:43:01.72008096 +0000 UTC m=+13.374425938" observedRunningTime="2025-10-20 12:43:02.550457891 +0000 UTC m=+14.204802890" watchObservedRunningTime="2025-10-20 12:43:02.550688417 +0000 UTC m=+14.205033416"
	Oct 20 12:43:15 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:15.455312     719 scope.go:117] "RemoveContainer" containerID="caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502"
	Oct 20 12:43:15 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:15.575766     719 scope.go:117] "RemoveContainer" containerID="caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502"
	Oct 20 12:43:15 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:15.576238     719 scope.go:117] "RemoveContainer" containerID="d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b"
	Oct 20 12:43:15 default-k8s-diff-port-874012 kubelet[719]: E1020 12:43:15.576460     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sc769_kubernetes-dashboard(b42abc13-1c7a-4eae-94c0-853accd3f9a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769" podUID="b42abc13-1c7a-4eae-94c0-853accd3f9a3"
	Oct 20 12:43:21 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:21.798143     719 scope.go:117] "RemoveContainer" containerID="d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b"
	Oct 20 12:43:21 default-k8s-diff-port-874012 kubelet[719]: E1020 12:43:21.798349     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sc769_kubernetes-dashboard(b42abc13-1c7a-4eae-94c0-853accd3f9a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769" podUID="b42abc13-1c7a-4eae-94c0-853accd3f9a3"
	Oct 20 12:43:23 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:23.601858     719 scope.go:117] "RemoveContainer" containerID="949fa188399d88fb36148cd3e18aead87c4e1915aac3b52977a50c822f49bd7f"
	Oct 20 12:43:37 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:37.455587     719 scope.go:117] "RemoveContainer" containerID="d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b"
	Oct 20 12:43:37 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:37.639445     719 scope.go:117] "RemoveContainer" containerID="d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b"
	Oct 20 12:43:37 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:37.639756     719 scope.go:117] "RemoveContainer" containerID="52f945f0582ca2066d66902a718c474fe74c72c28edc6b6760d429767fb96487"
	Oct 20 12:43:37 default-k8s-diff-port-874012 kubelet[719]: E1020 12:43:37.639984     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sc769_kubernetes-dashboard(b42abc13-1c7a-4eae-94c0-853accd3f9a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769" podUID="b42abc13-1c7a-4eae-94c0-853accd3f9a3"
	Oct 20 12:43:39 default-k8s-diff-port-874012 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 12:43:39 default-k8s-diff-port-874012 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 12:43:39 default-k8s-diff-port-874012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 20 12:43:39 default-k8s-diff-port-874012 systemd[1]: kubelet.service: Consumed 1.806s CPU time.
	
	
	==> kubernetes-dashboard [997f5fb70cf17401f9f118f22b72542195a6fa932ca73033e3cb05b2879ccce7] <==
	2025/10/20 12:43:01 Starting overwatch
	2025/10/20 12:43:01 Using namespace: kubernetes-dashboard
	2025/10/20 12:43:01 Using in-cluster config to connect to apiserver
	2025/10/20 12:43:01 Using secret token for csrf signing
	2025/10/20 12:43:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 12:43:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 12:43:01 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 12:43:01 Generating JWE encryption key
	2025/10/20 12:43:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 12:43:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 12:43:01 Initializing JWE encryption key from synchronized object
	2025/10/20 12:43:01 Creating in-cluster Sidecar client
	2025/10/20 12:43:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 12:43:01 Serving insecurely on HTTP port: 9090
	2025/10/20 12:43:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [949fa188399d88fb36148cd3e18aead87c4e1915aac3b52977a50c822f49bd7f] <==
	I1020 12:42:52.728896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 12:43:22.733152       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fe371429b88344749ac37a938a9d1bdc124657f68155bff696e9d16c03ceb4e6] <==
	I1020 12:43:23.649124       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 12:43:23.657579       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 12:43:23.657618       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 12:43:23.659847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:27.115839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:31.377873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:34.977160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:38.031198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:41.053878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:41.060130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:43:41.060297       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 12:43:41.060443       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"992be66e-ad31-4768-ae4d-5fe58274f9ef", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-874012_f053c512-15c3-436e-bb0c-1f95987eafed became leader
	I1020 12:43:41.060551       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-874012_f053c512-15c3-436e-bb0c-1f95987eafed!
	W1020 12:43:41.063443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:41.070287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:43:41.161511       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-874012_f053c512-15c3-436e-bb0c-1f95987eafed!
	

-- /stdout --
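The dump above carries this run's two recurring failure signatures: etcd warning "apply request took too long" under disk/CPU pressure, and dashboard-metrics-scraper cycling through CrashLoopBackOff with a doubling back-off (10s, 20s, 40s in the kubelet section). A minimal sketch for surfacing both from a saved copy of the dump (the logs.txt filename is an assumption, not something the harness writes):

    # Hypothetical: assumes the post-mortem output above was saved to logs.txt.
    # Both patterns appear verbatim in the etcd and kubelet sections.
    grep -E 'apply request took too long|CrashLoopBackOff' logs.txt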
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012: exit status 2 (340.733166ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-874012 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
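The field selector above is how the post-mortem enumerates pods that are not (or are no longer) in phase Running. The same check can be reproduced by hand against this profile's context; a sketch using only standard kubectl flags:

    # List the names of all non-Running pods across namespaces for this profile.
    kubectl --context default-k8s-diff-port-874012 get pods -A \
      --field-selector=status.phase!=Running \
      -o jsonpath='{.items[*].metadata.name}'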
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-874012
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-874012:

-- stdout --
	[
	    {
	        "Id": "fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7",
	        "Created": "2025-10-20T12:41:38.524846166Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272939,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:42:41.663509299Z",
	            "FinishedAt": "2025-10-20T12:42:40.613725086Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7/hostname",
	        "HostsPath": "/var/lib/docker/containers/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7/hosts",
	        "LogPath": "/var/lib/docker/containers/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7/fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7-json.log",
	        "Name": "/default-k8s-diff-port-874012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-874012:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-874012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fbc9ff1c79c14077e2a2fbe4229075830b659a3900399f7779ede049223e2ab7",
	                "LowerDir": "/var/lib/docker/overlay2/23ccaa2c1eba589d41d5dd53dc9f8e4141a1f29a896f0142badd4af6e87805d6-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23ccaa2c1eba589d41d5dd53dc9f8e4141a1f29a896f0142badd4af6e87805d6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23ccaa2c1eba589d41d5dd53dc9f8e4141a1f29a896f0142badd4af6e87805d6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23ccaa2c1eba589d41d5dd53dc9f8e4141a1f29a896f0142badd4af6e87805d6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-874012",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-874012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-874012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-874012",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-874012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6963550a8b83a3161c6af9b71432f46dac540327d6a58054f3fd22889d90e2c0",
	            "SandboxKey": "/var/run/docker/netns/6963550a8b83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-874012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:ca:e1:21:0e:bc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "071054924bdb32d774c4d0c0f3c167909dde1b983fbdc59f24f908b03d171adf",
	                    "EndpointID": "bade128faf5d2063cbd63ac376020bf9b21a6d2a73466d75d4d193e39ba48bcc",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-874012",
	                        "fbc9ff1c79c1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
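Most of the inspect payload above is static container configuration; the fields the Pause post-mortem actually keys on are the container state and the published ports (8444/tcp is this profile's apiserver port, mapped to 127.0.0.1:33096 above). A sketch for pulling just those fields with docker's Go-template formatter, using the profile name from this run:

    # Container state: expected "running" here, per the State block above.
    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' default-k8s-diff-port-874012
    # Host port bound to the in-container apiserver port 8444.
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-874012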
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012
E1020 12:43:43.468865   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012: exit status 2 (319.643333ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-874012 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-874012 logs -n 25: (1.229381874s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p cert-expiration-365628 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p cert-expiration-365628                                                                                                                                                                                                                     │ cert-expiration-365628       │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p disable-driver-mounts-796609                                                                                                                                                                                                               │ disable-driver-mounts-796609 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-874012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-916479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-874012 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ stop    │ -p newest-cni-916479 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-916479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ image   │ newest-cni-916479 image list --format=json                                                                                                                                                                                                    │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ pause   │ -p newest-cni-916479 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-874012 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:43 UTC │
	│ delete  │ -p newest-cni-916479                                                                                                                                                                                                                          │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ delete  │ -p newest-cni-916479                                                                                                                                                                                                                          │ newest-cni-916479            │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:42 UTC │
	│ start   │ -p auto-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-312375                  │ jenkins │ v1.37.0 │ 20 Oct 25 12:42 UTC │ 20 Oct 25 12:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-907116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ stop    │ -p embed-certs-907116 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-907116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ start   │ -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-907116           │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 pgrep -a kubelet                                                                                                                                                                                                               │ auto-312375                  │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ image   │ default-k8s-diff-port-874012 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ pause   │ -p default-k8s-diff-port-874012 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-874012 │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:43:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
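
For readers unfamiliar with this header layout: each entry starts with a severity letter (I/W/E/F), the month and day, a microsecond timestamp, the emitting goroutine/thread id, and the source file and line. Below is a minimal, hypothetical Go sketch of a parser for that format; the regexp groups are inferred only from the format string above and are not part of minikube or klog itself.

// klogparse is an illustrative sketch, not minikube code: it splits one
// log line of the form "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
// into its documented fields.
package main

import (
	"fmt"
	"regexp"
)

// Groups: severity, mmdd, time, threadid, file:line, message.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I1020 12:43:23.706101  282174 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog header line")
		return
	}
	fmt.Printf("severity=%s date(mmdd)=%s time=%s threadid=%s at=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}

Running it against the first log line below prints the severity "I", the date "1020", and the source location "out.go:360", which is how the per-process streams in this dump can be separated by thread id.
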
	I1020 12:43:23.706101  282174 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:43:23.706205  282174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:43:23.706212  282174 out.go:374] Setting ErrFile to fd 2...
	I1020 12:43:23.706225  282174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:43:23.706449  282174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:43:23.706935  282174 out.go:368] Setting JSON to false
	I1020 12:43:23.708227  282174 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5153,"bootTime":1760959051,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:43:23.708330  282174 start.go:141] virtualization: kvm guest
	I1020 12:43:23.710747  282174 out.go:179] * [embed-certs-907116] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:43:23.712519  282174 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:43:23.712541  282174 notify.go:220] Checking for updates...
	I1020 12:43:23.715514  282174 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:43:23.717095  282174 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:43:23.718463  282174 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:43:23.719947  282174 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:43:23.721420  282174 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:43:23.723309  282174 config.go:182] Loaded profile config "embed-certs-907116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:43:23.723838  282174 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:43:23.749724  282174 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:43:23.749840  282174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:43:23.809620  282174 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-20 12:43:23.798648685 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:43:23.809728  282174 docker.go:318] overlay module found
	I1020 12:43:23.811599  282174 out.go:179] * Using the docker driver based on existing profile
	I1020 12:43:23.812865  282174 start.go:305] selected driver: docker
	I1020 12:43:23.812883  282174 start.go:925] validating driver "docker" against &{Name:embed-certs-907116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:43:23.812962  282174 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:43:23.813549  282174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:43:23.870075  282174 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-20 12:43:23.860331312 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:43:23.870333  282174 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:43:23.870359  282174 cni.go:84] Creating CNI manager for ""
	I1020 12:43:23.870404  282174 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:43:23.870437  282174 start.go:349] cluster config:
	{Name:embed-certs-907116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:43:23.872452  282174 out.go:179] * Starting "embed-certs-907116" primary control-plane node in "embed-certs-907116" cluster
	I1020 12:43:23.873588  282174 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:43:23.874910  282174 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:43:23.876267  282174 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:43:23.876315  282174 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:43:23.876318  282174 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:43:23.876421  282174 cache.go:58] Caching tarball of preloaded images
	I1020 12:43:23.876499  282174 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:43:23.876510  282174 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:43:23.876607  282174 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/config.json ...
	I1020 12:43:23.897722  282174 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:43:23.897741  282174 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:43:23.897757  282174 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:43:23.897810  282174 start.go:360] acquireMachinesLock for embed-certs-907116: {Name:mk081262f5d599396d0c232c9311858444bc2e47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:43:23.897878  282174 start.go:364] duration metric: took 38.1µs to acquireMachinesLock for "embed-certs-907116"
	I1020 12:43:23.897896  282174 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:43:23.897901  282174 fix.go:54] fixHost starting: 
	I1020 12:43:23.898095  282174 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:43:23.917316  282174 fix.go:112] recreateIfNeeded on embed-certs-907116: state=Stopped err=<nil>
	W1020 12:43:23.917345  282174 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 12:43:21.826902  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:21.827348  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:21.827396  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:21.827449  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:21.857399  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:21.857416  236655 cri.go:89] found id: ""
	I1020 12:43:21.857424  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:21.857473  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:21.861487  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:21.861549  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:21.888950  236655 cri.go:89] found id: ""
	I1020 12:43:21.888975  236655 logs.go:282] 0 containers: []
	W1020 12:43:21.888985  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:21.888991  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:21.889102  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:43:21.916702  236655 cri.go:89] found id: ""
	I1020 12:43:21.916730  236655 logs.go:282] 0 containers: []
	W1020 12:43:21.916740  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:43:21.916746  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:43:21.916813  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:43:21.946607  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:21.946633  236655 cri.go:89] found id: ""
	I1020 12:43:21.946643  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:43:21.946702  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:21.951545  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:43:21.951616  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:43:21.980724  236655 cri.go:89] found id: ""
	I1020 12:43:21.980746  236655 logs.go:282] 0 containers: []
	W1020 12:43:21.980754  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:43:21.980760  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:43:21.980832  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:43:22.007635  236655 cri.go:89] found id: "08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:22.007658  236655 cri.go:89] found id: "3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:43:22.007663  236655 cri.go:89] found id: ""
	I1020 12:43:22.007672  236655 logs.go:282] 2 containers: [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb]
	I1020 12:43:22.007732  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:22.011969  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:22.016043  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:43:22.016113  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:43:22.045288  236655 cri.go:89] found id: ""
	I1020 12:43:22.045319  236655 logs.go:282] 0 containers: []
	W1020 12:43:22.045330  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:43:22.045348  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:43:22.045403  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:43:22.075166  236655 cri.go:89] found id: ""
	I1020 12:43:22.075194  236655 logs.go:282] 0 containers: []
	W1020 12:43:22.075201  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:43:22.075216  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:43:22.075227  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:43:22.107132  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:43:22.107157  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:43:22.196060  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:43:22.196098  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:43:22.254612  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:43:22.254632  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:43:22.254646  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:22.289682  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:43:22.289716  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:22.343109  236655 logs.go:123] Gathering logs for kube-controller-manager [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a] ...
	I1020 12:43:22.343142  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:22.372250  236655 logs.go:123] Gathering logs for kube-controller-manager [3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb] ...
	I1020 12:43:22.372282  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ff946ff1c15d09818100b1068429020c4c4981890cb831db1e505ab196b6edb"
	I1020 12:43:22.400377  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:43:22.400405  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:43:22.415787  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:43:22.415811  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
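
The repeated healthz probes above follow a simple pattern: hit /healthz, treat a refused connection as "apiserver not up yet", gather diagnostics, and retry. A minimal Go sketch of that probe loop follows; the URL, timeout, and retry interval are placeholders, not minikube's actual retry policy.

// Illustrative only: poll an apiserver /healthz endpoint until it returns
// 200, treating dial errors such as "connection refused" as "not ready yet",
// like the loop in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed cert here, so skip verification
		// for the probe (acceptable for a health check, never for real traffic).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "returned 200: ok" below
			}
		}
		time.Sleep(3 * time.Second) // connection refused => apiserver still down
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
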
	I1020 12:43:24.972831  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:24.973333  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:24.973384  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:24.973439  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:25.001992  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:25.002017  236655 cri.go:89] found id: ""
	I1020 12:43:25.002027  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:25.002096  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:25.006734  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:25.006815  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:25.033907  236655 cri.go:89] found id: ""
	I1020 12:43:25.033939  236655 logs.go:282] 0 containers: []
	W1020 12:43:25.033950  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:25.033957  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:25.034024  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:43:25.062007  236655 cri.go:89] found id: ""
	I1020 12:43:25.062031  236655 logs.go:282] 0 containers: []
	W1020 12:43:25.062045  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:43:25.062050  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:43:25.062109  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:43:25.090680  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:25.090699  236655 cri.go:89] found id: ""
	I1020 12:43:25.090708  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:43:25.090766  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:25.095189  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:43:25.095259  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:43:25.122855  236655 cri.go:89] found id: ""
	I1020 12:43:25.122881  236655 logs.go:282] 0 containers: []
	W1020 12:43:25.122888  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:43:25.122894  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:43:25.122950  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:43:25.150747  236655 cri.go:89] found id: "08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:25.150786  236655 cri.go:89] found id: ""
	I1020 12:43:25.150796  236655 logs.go:282] 1 containers: [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a]
	I1020 12:43:25.150855  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:25.154809  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:43:25.154876  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:43:25.181663  236655 cri.go:89] found id: ""
	I1020 12:43:25.181689  236655 logs.go:282] 0 containers: []
	W1020 12:43:25.181697  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:43:25.181703  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:43:25.181758  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:43:25.208702  236655 cri.go:89] found id: ""
	I1020 12:43:25.208735  236655 logs.go:282] 0 containers: []
	W1020 12:43:25.208746  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:43:25.208757  236655 logs.go:123] Gathering logs for kube-controller-manager [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a] ...
	I1020 12:43:25.208797  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:25.236136  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:43:25.236165  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:43:25.294014  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:43:25.294056  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:43:25.324895  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:43:25.324922  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:43:25.428345  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:43:25.428377  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:43:25.444408  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:43:25.444438  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:43:25.503440  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:43:25.503462  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:43:25.503479  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:25.541399  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:43:25.541432  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	W1020 12:43:21.581796  272557 pod_ready.go:104] pod "coredns-66bc5c9577-vd5sd" is not "Ready", error: <nil>
	W1020 12:43:24.081803  272557 pod_ready.go:104] pod "coredns-66bc5c9577-vd5sd" is not "Ready", error: <nil>
	I1020 12:43:25.580597  272557 pod_ready.go:94] pod "coredns-66bc5c9577-vd5sd" is "Ready"
	I1020 12:43:25.580625  272557 pod_ready.go:86] duration metric: took 32.005357365s for pod "coredns-66bc5c9577-vd5sd" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.583091  272557 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.587013  272557 pod_ready.go:94] pod "etcd-default-k8s-diff-port-874012" is "Ready"
	I1020 12:43:25.587034  272557 pod_ready.go:86] duration metric: took 3.918216ms for pod "etcd-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.588790  272557 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.592449  272557 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-874012" is "Ready"
	I1020 12:43:25.592483  272557 pod_ready.go:86] duration metric: took 3.662358ms for pod "kube-apiserver-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.594352  272557 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.779237  272557 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-874012" is "Ready"
	I1020 12:43:25.779266  272557 pod_ready.go:86] duration metric: took 184.894574ms for pod "kube-controller-manager-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:25.978391  272557 pod_ready.go:83] waiting for pod "kube-proxy-bbw6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:26.378449  272557 pod_ready.go:94] pod "kube-proxy-bbw6k" is "Ready"
	I1020 12:43:26.378476  272557 pod_ready.go:86] duration metric: took 400.059178ms for pod "kube-proxy-bbw6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:26.578871  272557 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:26.978767  272557 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-874012" is "Ready"
	I1020 12:43:26.978825  272557 pod_ready.go:86] duration metric: took 399.922336ms for pod "kube-scheduler-default-k8s-diff-port-874012" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:26.978838  272557 pod_ready.go:40] duration metric: took 33.407934682s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:43:27.027516  272557 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:43:27.029988  272557 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-874012" cluster and "default" namespace by default
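
The pod_ready waits in this run boil down to polling each pod until its PodReady condition turns true. Below is a hedged client-go sketch of the same idea; the kubeconfig path, namespace, pod name, and intervals are placeholders, and minikube's internal implementation differs.

// Sketch: wait for a pod's Ready condition with client-go, mirroring log
// lines like `pod "coredns-66bc5c9577-vd5sd" is "Ready"` above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, up to 4m (matching the "extra waiting up to 4m0s" above),
	// until the pod reports Ready.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-vd5sd", metav1.GetOptions{})
			if err != nil {
				return false, nil // not found yet or transient error: keep waiting
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready:", err == nil)
}
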
	W1020 12:43:23.896589  275397 node_ready.go:57] node "auto-312375" has "Ready":"False" status (will retry)
	W1020 12:43:26.396467  275397 node_ready.go:57] node "auto-312375" has "Ready":"False" status (will retry)
	I1020 12:43:26.896574  275397 node_ready.go:49] node "auto-312375" is "Ready"
	I1020 12:43:26.896613  275397 node_ready.go:38] duration metric: took 11.503638268s for node "auto-312375" to be "Ready" ...
	I1020 12:43:26.896632  275397 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:43:26.896700  275397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:43:26.910084  275397 api_server.go:72] duration metric: took 11.798943592s to wait for apiserver process to appear ...
	I1020 12:43:26.910117  275397 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:43:26.910157  275397 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 12:43:26.915069  275397 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 12:43:26.916040  275397 api_server.go:141] control plane version: v1.34.1
	I1020 12:43:26.916067  275397 api_server.go:131] duration metric: took 5.942528ms to wait for apiserver health ...
	I1020 12:43:26.916077  275397 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:43:26.921411  275397 system_pods.go:59] 8 kube-system pods found
	I1020 12:43:26.921454  275397 system_pods.go:61] "coredns-66bc5c9577-rgfv9" [ca422c45-8ea1-4f70-bf28-7c3b25880131] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:43:26.921469  275397 system_pods.go:61] "etcd-auto-312375" [0ce79a2c-44b2-43de-90b6-9aa5c277f2dc] Running
	I1020 12:43:26.921477  275397 system_pods.go:61] "kindnet-mb9vw" [3822b15a-6712-4ed6-81b2-2f87ee8a91df] Running
	I1020 12:43:26.921491  275397 system_pods.go:61] "kube-apiserver-auto-312375" [7a5f06de-c051-4773-a515-51ffcea32b82] Running
	I1020 12:43:26.921501  275397 system_pods.go:61] "kube-controller-manager-auto-312375" [af5994aa-ebbf-4194-adae-55a350e0d6a0] Running
	I1020 12:43:26.921506  275397 system_pods.go:61] "kube-proxy-xs7qd" [cab32b63-1b59-4151-b817-ca71b2f21e33] Running
	I1020 12:43:26.921519  275397 system_pods.go:61] "kube-scheduler-auto-312375" [692558fd-9ac7-48f1-9724-560e2dbfc8c0] Running
	I1020 12:43:26.921526  275397 system_pods.go:61] "storage-provisioner" [37917c30-9182-4df7-bc90-29929fcdc209] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:43:26.921538  275397 system_pods.go:74] duration metric: took 5.453931ms to wait for pod list to return data ...
	I1020 12:43:26.921548  275397 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:43:26.924912  275397 default_sa.go:45] found service account: "default"
	I1020 12:43:26.924937  275397 default_sa.go:55] duration metric: took 3.383041ms for default service account to be created ...
	I1020 12:43:26.924947  275397 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:43:27.021004  275397 system_pods.go:86] 8 kube-system pods found
	I1020 12:43:27.021061  275397 system_pods.go:89] "coredns-66bc5c9577-rgfv9" [ca422c45-8ea1-4f70-bf28-7c3b25880131] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:43:27.021069  275397 system_pods.go:89] "etcd-auto-312375" [0ce79a2c-44b2-43de-90b6-9aa5c277f2dc] Running
	I1020 12:43:27.021076  275397 system_pods.go:89] "kindnet-mb9vw" [3822b15a-6712-4ed6-81b2-2f87ee8a91df] Running
	I1020 12:43:27.021081  275397 system_pods.go:89] "kube-apiserver-auto-312375" [7a5f06de-c051-4773-a515-51ffcea32b82] Running
	I1020 12:43:27.021087  275397 system_pods.go:89] "kube-controller-manager-auto-312375" [af5994aa-ebbf-4194-adae-55a350e0d6a0] Running
	I1020 12:43:27.021093  275397 system_pods.go:89] "kube-proxy-xs7qd" [cab32b63-1b59-4151-b817-ca71b2f21e33] Running
	I1020 12:43:27.021099  275397 system_pods.go:89] "kube-scheduler-auto-312375" [692558fd-9ac7-48f1-9724-560e2dbfc8c0] Running
	I1020 12:43:27.021107  275397 system_pods.go:89] "storage-provisioner" [37917c30-9182-4df7-bc90-29929fcdc209] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:43:27.021136  275397 retry.go:31] will retry after 293.826364ms: missing components: kube-dns
	I1020 12:43:23.919270  282174 out.go:252] * Restarting existing docker container for "embed-certs-907116" ...
	I1020 12:43:23.919343  282174 cli_runner.go:164] Run: docker start embed-certs-907116
	I1020 12:43:24.172969  282174 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:43:24.191198  282174 kic.go:430] container "embed-certs-907116" state is running.
	I1020 12:43:24.191657  282174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-907116
	I1020 12:43:24.210842  282174 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/config.json ...
	I1020 12:43:24.211062  282174 machine.go:93] provisionDockerMachine start ...
	I1020 12:43:24.211122  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:24.229699  282174 main.go:141] libmachine: Using SSH client type: native
	I1020 12:43:24.229966  282174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1020 12:43:24.229983  282174 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:43:24.230631  282174 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44900->127.0.0.1:33103: read: connection reset by peer
	I1020 12:43:27.378887  282174 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-907116
	
	I1020 12:43:27.378916  282174 ubuntu.go:182] provisioning hostname "embed-certs-907116"
	I1020 12:43:27.378984  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:27.397329  282174 main.go:141] libmachine: Using SSH client type: native
	I1020 12:43:27.397559  282174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1020 12:43:27.397573  282174 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-907116 && echo "embed-certs-907116" | sudo tee /etc/hostname
	I1020 12:43:27.550953  282174 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-907116
	
	I1020 12:43:27.551037  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:27.570194  282174 main.go:141] libmachine: Using SSH client type: native
	I1020 12:43:27.570489  282174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1020 12:43:27.570514  282174 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-907116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-907116/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-907116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:43:27.715553  282174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:43:27.715583  282174 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:43:27.715619  282174 ubuntu.go:190] setting up certificates
	I1020 12:43:27.715629  282174 provision.go:84] configureAuth start
	I1020 12:43:27.715687  282174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-907116
	I1020 12:43:27.733741  282174 provision.go:143] copyHostCerts
	I1020 12:43:27.733829  282174 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:43:27.733849  282174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:43:27.733927  282174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:43:27.734020  282174 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:43:27.734035  282174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:43:27.734066  282174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:43:27.734171  282174 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:43:27.734183  282174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:43:27.734208  282174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:43:27.734256  282174 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.embed-certs-907116 san=[127.0.0.1 192.168.76.2 embed-certs-907116 localhost minikube]
	I1020 12:43:27.811854  282174 provision.go:177] copyRemoteCerts
	I1020 12:43:27.811921  282174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:43:27.811961  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:27.830550  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:27.932830  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:43:27.951988  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 12:43:27.970519  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 12:43:27.988179  282174 provision.go:87] duration metric: took 272.535074ms to configureAuth
	I1020 12:43:27.988209  282174 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:43:27.988396  282174 config.go:182] Loaded profile config "embed-certs-907116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:43:27.988502  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:28.008448  282174 main.go:141] libmachine: Using SSH client type: native
	I1020 12:43:28.008782  282174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1020 12:43:28.008808  282174 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:43:28.325424  282174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:43:28.325453  282174 machine.go:96] duration metric: took 4.114377236s to provisionDockerMachine
	I1020 12:43:28.325466  282174 start.go:293] postStartSetup for "embed-certs-907116" (driver="docker")
	I1020 12:43:28.325562  282174 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:43:28.325633  282174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:43:28.325679  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:28.348002  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:28.449997  282174 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:43:28.454678  282174 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:43:28.454714  282174 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:43:28.454727  282174 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:43:28.454870  282174 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:43:28.454986  282174 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:43:28.455122  282174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:43:28.463446  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:43:28.481909  282174 start.go:296] duration metric: took 156.427219ms for postStartSetup
	I1020 12:43:28.481988  282174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:43:28.482045  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:28.503288  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:28.601973  282174 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:43:28.606607  282174 fix.go:56] duration metric: took 4.708699618s for fixHost
	I1020 12:43:28.606631  282174 start.go:83] releasing machines lock for "embed-certs-907116", held for 4.708743183s
	I1020 12:43:28.606697  282174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-907116
	I1020 12:43:28.626869  282174 ssh_runner.go:195] Run: cat /version.json
	I1020 12:43:28.626941  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:28.626987  282174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:43:28.627061  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:28.648328  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:28.649963  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:27.319141  275397 system_pods.go:86] 8 kube-system pods found
	I1020 12:43:27.319178  275397 system_pods.go:89] "coredns-66bc5c9577-rgfv9" [ca422c45-8ea1-4f70-bf28-7c3b25880131] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:43:27.319186  275397 system_pods.go:89] "etcd-auto-312375" [0ce79a2c-44b2-43de-90b6-9aa5c277f2dc] Running
	I1020 12:43:27.319194  275397 system_pods.go:89] "kindnet-mb9vw" [3822b15a-6712-4ed6-81b2-2f87ee8a91df] Running
	I1020 12:43:27.319200  275397 system_pods.go:89] "kube-apiserver-auto-312375" [7a5f06de-c051-4773-a515-51ffcea32b82] Running
	I1020 12:43:27.319205  275397 system_pods.go:89] "kube-controller-manager-auto-312375" [af5994aa-ebbf-4194-adae-55a350e0d6a0] Running
	I1020 12:43:27.319212  275397 system_pods.go:89] "kube-proxy-xs7qd" [cab32b63-1b59-4151-b817-ca71b2f21e33] Running
	I1020 12:43:27.319217  275397 system_pods.go:89] "kube-scheduler-auto-312375" [692558fd-9ac7-48f1-9724-560e2dbfc8c0] Running
	I1020 12:43:27.319225  275397 system_pods.go:89] "storage-provisioner" [37917c30-9182-4df7-bc90-29929fcdc209] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:43:27.319242  275397 retry.go:31] will retry after 248.682111ms: missing components: kube-dns
	I1020 12:43:27.572329  275397 system_pods.go:86] 8 kube-system pods found
	I1020 12:43:27.572365  275397 system_pods.go:89] "coredns-66bc5c9577-rgfv9" [ca422c45-8ea1-4f70-bf28-7c3b25880131] Running
	I1020 12:43:27.572376  275397 system_pods.go:89] "etcd-auto-312375" [0ce79a2c-44b2-43de-90b6-9aa5c277f2dc] Running
	I1020 12:43:27.572381  275397 system_pods.go:89] "kindnet-mb9vw" [3822b15a-6712-4ed6-81b2-2f87ee8a91df] Running
	I1020 12:43:27.572386  275397 system_pods.go:89] "kube-apiserver-auto-312375" [7a5f06de-c051-4773-a515-51ffcea32b82] Running
	I1020 12:43:27.572393  275397 system_pods.go:89] "kube-controller-manager-auto-312375" [af5994aa-ebbf-4194-adae-55a350e0d6a0] Running
	I1020 12:43:27.572403  275397 system_pods.go:89] "kube-proxy-xs7qd" [cab32b63-1b59-4151-b817-ca71b2f21e33] Running
	I1020 12:43:27.572409  275397 system_pods.go:89] "kube-scheduler-auto-312375" [692558fd-9ac7-48f1-9724-560e2dbfc8c0] Running
	I1020 12:43:27.572418  275397 system_pods.go:89] "storage-provisioner" [37917c30-9182-4df7-bc90-29929fcdc209] Running
	I1020 12:43:27.572429  275397 system_pods.go:126] duration metric: took 647.473655ms to wait for k8s-apps to be running ...
	I1020 12:43:27.572442  275397 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:43:27.572492  275397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:43:27.585966  275397 system_svc.go:56] duration metric: took 13.513469ms WaitForService to wait for kubelet
	I1020 12:43:27.585999  275397 kubeadm.go:586] duration metric: took 12.474864066s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:43:27.586024  275397 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:43:27.589080  275397 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:43:27.589106  275397 node_conditions.go:123] node cpu capacity is 8
	I1020 12:43:27.589122  275397 node_conditions.go:105] duration metric: took 3.092958ms to run NodePressure ...
	I1020 12:43:27.589136  275397 start.go:241] waiting for startup goroutines ...
	I1020 12:43:27.589145  275397 start.go:246] waiting for cluster config update ...
	I1020 12:43:27.589160  275397 start.go:255] writing updated cluster config ...
	I1020 12:43:27.589457  275397 ssh_runner.go:195] Run: rm -f paused
	I1020 12:43:27.593486  275397 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:43:27.672185  275397 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rgfv9" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.676419  275397 pod_ready.go:94] pod "coredns-66bc5c9577-rgfv9" is "Ready"
	I1020 12:43:27.676442  275397 pod_ready.go:86] duration metric: took 4.229037ms for pod "coredns-66bc5c9577-rgfv9" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.678380  275397 pod_ready.go:83] waiting for pod "etcd-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.681811  275397 pod_ready.go:94] pod "etcd-auto-312375" is "Ready"
	I1020 12:43:27.681832  275397 pod_ready.go:86] duration metric: took 3.43447ms for pod "etcd-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.683534  275397 pod_ready.go:83] waiting for pod "kube-apiserver-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.687220  275397 pod_ready.go:94] pod "kube-apiserver-auto-312375" is "Ready"
	I1020 12:43:27.687240  275397 pod_ready.go:86] duration metric: took 3.685775ms for pod "kube-apiserver-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.689089  275397 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:27.997592  275397 pod_ready.go:94] pod "kube-controller-manager-auto-312375" is "Ready"
	I1020 12:43:27.997616  275397 pod_ready.go:86] duration metric: took 308.508006ms for pod "kube-controller-manager-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:28.197897  275397 pod_ready.go:83] waiting for pod "kube-proxy-xs7qd" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:28.596622  275397 pod_ready.go:94] pod "kube-proxy-xs7qd" is "Ready"
	I1020 12:43:28.596649  275397 pod_ready.go:86] duration metric: took 398.721669ms for pod "kube-proxy-xs7qd" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:28.797764  275397 pod_ready.go:83] waiting for pod "kube-scheduler-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:29.197752  275397 pod_ready.go:94] pod "kube-scheduler-auto-312375" is "Ready"
	I1020 12:43:29.197805  275397 pod_ready.go:86] duration metric: took 399.99361ms for pod "kube-scheduler-auto-312375" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:43:29.197821  275397 pod_ready.go:40] duration metric: took 1.604309838s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:43:29.249343  275397 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:43:29.251344  275397 out.go:179] * Done! kubectl is now configured to use "auto-312375" cluster and "default" namespace by default
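The pod_ready polling above amounts to waiting on each pod's Ready condition. A hand-run equivalent for the CoreDNS case, assuming the kubectl context carries the profile name as minikube configures it:

	# wait up to 4 minutes for kube-dns-labelled pods to report Ready
	kubectl --context auto-312375 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m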
	I1020 12:43:28.801244  282174 ssh_runner.go:195] Run: systemctl --version
	I1020 12:43:28.807957  282174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:43:28.842977  282174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:43:28.847963  282174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:43:28.848035  282174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:43:28.856212  282174 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 12:43:28.856238  282174 start.go:495] detecting cgroup driver to use...
	I1020 12:43:28.856270  282174 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:43:28.856304  282174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:43:28.870914  282174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:43:28.883680  282174 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:43:28.883734  282174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:43:28.898417  282174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:43:28.911087  282174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:43:28.997504  282174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:43:29.082457  282174 docker.go:234] disabling docker service ...
	I1020 12:43:29.082513  282174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:43:29.097283  282174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:43:29.110809  282174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:43:29.196658  282174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:43:29.284286  282174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:43:29.298139  282174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:43:29.314539  282174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:43:29.314599  282174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:43:29.324387  282174 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:43:29.324459  282174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:43:29.334972  282174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:43:29.344475  282174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:43:29.354330  282174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:43:29.364396  282174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:43:29.376614  282174 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:43:29.386393  282174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
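Taken together, the sed edits above should leave the relevant lines of /etc/crio/crio.conf.d/02-crio.conf reading roughly as below (reconstructed from the commands in the log, not captured from the host); /etc/crictl.yaml, written earlier via tee, holds the single line "runtime-endpoint: unix:///var/run/crio/crio.sock":

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]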
	I1020 12:43:29.396462  282174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:43:29.404612  282174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:43:29.412821  282174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:43:29.505899  282174 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:43:29.622384  282174 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:43:29.622457  282174 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
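The "Will wait 60s for socket path" step is a stat poll on the CRI-O socket after the restart; a shell sketch of the same wait:

	# poll for the CRI-O socket for up to 60 seconds
	timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'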
	I1020 12:43:29.627186  282174 start.go:563] Will wait 60s for crictl version
	I1020 12:43:29.627269  282174 ssh_runner.go:195] Run: which crictl
	I1020 12:43:29.631624  282174 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:43:29.659500  282174 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:43:29.659582  282174 ssh_runner.go:195] Run: crio --version
	I1020 12:43:29.696494  282174 ssh_runner.go:195] Run: crio --version
	I1020 12:43:29.729019  282174 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:43:29.731645  282174 cli_runner.go:164] Run: docker network inspect embed-certs-907116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
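That inspect call packs name, driver, subnet, gateway, MTU, and container IPs into one blob via a Go template. A trimmed variant that pulls just the subnet and gateway (standard docker CLI, network name from the log):

	docker network inspect embed-certs-907116 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'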
	I1020 12:43:29.753219  282174 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1020 12:43:29.757811  282174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
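Unrolled, that one-liner drops any existing host.minikube.internal entry, appends the fresh mapping, and copies the result back over /etc/hosts. The same logic, expanded for readability:

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts   # keep every other entry
	  printf '192.168.76.1\thost.minikube.internal\n'   # append the new mapping
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts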
	I1020 12:43:29.768575  282174 kubeadm.go:883] updating cluster {Name:embed-certs-907116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:43:29.768695  282174 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:43:29.768741  282174 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:43:29.799720  282174 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:43:29.799743  282174 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:43:29.799818  282174 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:43:29.826522  282174 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:43:29.826547  282174 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:43:29.826555  282174 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1020 12:43:29.826665  282174 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-907116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 12:43:29.826745  282174 ssh_runner.go:195] Run: crio config
	I1020 12:43:29.872225  282174 cni.go:84] Creating CNI manager for ""
	I1020 12:43:29.872246  282174 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:43:29.872263  282174 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:43:29.872299  282174 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-907116 NodeName:embed-certs-907116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:43:29.872454  282174 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-907116"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 12:43:29.872520  282174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:43:29.880747  282174 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:43:29.880820  282174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:43:29.888269  282174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1020 12:43:29.900673  282174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:43:29.913365  282174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
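With the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new, it could be sanity-checked against the kubeadm API before use; kubeadm v1.26+ ships a validate subcommand (a sketch, not a step minikube runs here):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new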
	I1020 12:43:29.926174  282174 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:43:29.929759  282174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:43:29.940496  282174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:43:30.028187  282174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:43:30.053128  282174 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116 for IP: 192.168.76.2
	I1020 12:43:30.053153  282174 certs.go:195] generating shared ca certs ...
	I1020 12:43:30.053172  282174 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:43:30.053336  282174 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:43:30.053385  282174 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:43:30.053399  282174 certs.go:257] generating profile certs ...
	I1020 12:43:30.053506  282174 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/client.key
	I1020 12:43:30.053592  282174 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/apiserver.key.e2821edb
	I1020 12:43:30.053646  282174 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/proxy-client.key
	I1020 12:43:30.053816  282174 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:43:30.053860  282174 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:43:30.053873  282174 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:43:30.053916  282174 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:43:30.053946  282174 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:43:30.053981  282174 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:43:30.054035  282174 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:43:30.054879  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:43:30.074599  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:43:30.094897  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:43:30.115066  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:43:30.139675  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1020 12:43:30.158019  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1020 12:43:30.175542  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:43:30.194482  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/embed-certs-907116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:43:30.213197  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:43:30.231586  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:43:30.250353  282174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:43:30.269449  282174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:43:30.287878  282174 ssh_runner.go:195] Run: openssl version
	I1020 12:43:30.296627  282174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:43:30.309378  282174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:43:30.315683  282174 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:43:30.315850  282174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:43:30.374507  282174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:43:30.388079  282174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:43:30.402319  282174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:43:30.409407  282174 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:43:30.409470  282174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:43:30.469989  282174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:43:30.482448  282174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:43:30.496730  282174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:43:30.503084  282174 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:43:30.503148  282174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:43:30.563976  282174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
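The three ln -fs steps above implement OpenSSL's hash-based CA lookup: each link name is the certificate's subject hash plus a ".0" suffix. A sketch of the same operation for the minikubeCA case (paths from the log):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run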
	I1020 12:43:30.576087  282174 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:43:30.585162  282174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 12:43:30.649887  282174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 12:43:30.706329  282174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 12:43:30.773997  282174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 12:43:30.834884  282174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 12:43:30.888384  282174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
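Each -checkend 86400 call asks OpenSSL whether the certificate expires within the next 24 hours (non-zero exit if so). The same sweep as a loop over the cert names seen above:

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	         etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    || echo "$c.crt expires within 24h"
	done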
	I1020 12:43:30.931270  282174 kubeadm.go:400] StartCluster: {Name:embed-certs-907116 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-907116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:43:30.931392  282174 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:43:30.931474  282174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:43:30.974739  282174 cri.go:89] found id: "71b8d519c8fcfa7da65604dd578e2dc4d11fc1ca223185cae0a2ce646c90d777"
	I1020 12:43:30.974765  282174 cri.go:89] found id: "b16a54394efdcb6933a56df962f7f8423ae93b34d8452a6afc5f404b46da576e"
	I1020 12:43:30.974781  282174 cri.go:89] found id: "22cf3642d99bbb980929d5d8e78116ccc79fbe6f90ed96694a1910e81f25dac6"
	I1020 12:43:30.974786  282174 cri.go:89] found id: "c4cc4d9df25ab88c844bc98d6506700dac4d75294815034c92cfa41e1ddb2d01"
	I1020 12:43:30.974798  282174 cri.go:89] found id: ""
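Any of the container IDs found above can be queried for its runtime state directly; for example, using crictl's go-template output (the ID is the first one from the log):

	sudo crictl inspect --output go-template \
	  --template '{{.status.state}}' 71b8d519c8fcfa7da65604dd578e2dc4d11fc1ca223185cae0a2ce646c90d777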
	I1020 12:43:30.974858  282174 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 12:43:30.992361  282174 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:43:30Z" level=error msg="open /run/runc: no such file or directory"
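The "open /run/runc: no such file or directory" error means runc has no state directory on this host, i.e. nothing is tracked as paused, so the warning is benign here. A quick check along the same lines:

	sudo test -d /run/runc && sudo runc list -f json || echo "no runc state dir; nothing paused"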
	I1020 12:43:30.992444  282174 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:43:31.003886  282174 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 12:43:31.003909  282174 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 12:43:31.003957  282174 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 12:43:31.014212  282174 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:43:31.015172  282174 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-907116" does not appear in /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:43:31.015673  282174 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-11075/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-907116" cluster setting kubeconfig missing "embed-certs-907116" context setting]
	I1020 12:43:31.016485  282174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:43:31.018425  282174 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 12:43:31.028743  282174 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1020 12:43:31.028811  282174 kubeadm.go:601] duration metric: took 24.895322ms to restartPrimaryControlPlane
	I1020 12:43:31.028823  282174 kubeadm.go:402] duration metric: took 97.565027ms to StartCluster
	I1020 12:43:31.028848  282174 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:43:31.028921  282174 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:43:31.030854  282174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:43:31.031070  282174 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:43:31.031218  282174 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:43:31.031315  282174 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-907116"
	I1020 12:43:31.031333  282174 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-907116"
	W1020 12:43:31.031341  282174 addons.go:247] addon storage-provisioner should already be in state true
	I1020 12:43:31.031347  282174 config.go:182] Loaded profile config "embed-certs-907116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:43:31.031371  282174 host.go:66] Checking if "embed-certs-907116" exists ...
	I1020 12:43:31.031380  282174 addons.go:69] Setting dashboard=true in profile "embed-certs-907116"
	I1020 12:43:31.031388  282174 addons.go:238] Setting addon dashboard=true in "embed-certs-907116"
	W1020 12:43:31.031394  282174 addons.go:247] addon dashboard should already be in state true
	I1020 12:43:31.031411  282174 host.go:66] Checking if "embed-certs-907116" exists ...
	I1020 12:43:31.031493  282174 addons.go:69] Setting default-storageclass=true in profile "embed-certs-907116"
	I1020 12:43:31.031520  282174 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-907116"
	I1020 12:43:31.031854  282174 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:43:31.031860  282174 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:43:31.032041  282174 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:43:31.033825  282174 out.go:179] * Verifying Kubernetes components...
	I1020 12:43:31.035307  282174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:43:31.063304  282174 addons.go:238] Setting addon default-storageclass=true in "embed-certs-907116"
	W1020 12:43:31.063327  282174 addons.go:247] addon default-storageclass should already be in state true
	I1020 12:43:31.063368  282174 host.go:66] Checking if "embed-certs-907116" exists ...
	I1020 12:43:31.063707  282174 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 12:43:31.063713  282174 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:43:31.064083  282174 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:43:31.069024  282174 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:43:31.069100  282174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:43:31.069184  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:31.073817  282174 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1020 12:43:28.101850  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:28.102297  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:28.102354  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:28.102412  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:28.130571  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:28.130595  236655 cri.go:89] found id: ""
	I1020 12:43:28.130603  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:28.130659  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:28.134635  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:28.134693  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:28.161034  236655 cri.go:89] found id: ""
	I1020 12:43:28.161061  236655 logs.go:282] 0 containers: []
	W1020 12:43:28.161068  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:28.161081  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:28.161128  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:43:28.188260  236655 cri.go:89] found id: ""
	I1020 12:43:28.188288  236655 logs.go:282] 0 containers: []
	W1020 12:43:28.188299  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:43:28.188306  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:43:28.188366  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:43:28.216717  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:28.216745  236655 cri.go:89] found id: ""
	I1020 12:43:28.216754  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:43:28.216826  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:28.220831  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:43:28.220901  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:43:28.248167  236655 cri.go:89] found id: ""
	I1020 12:43:28.248193  236655 logs.go:282] 0 containers: []
	W1020 12:43:28.248202  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:43:28.248212  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:43:28.248268  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:43:28.277447  236655 cri.go:89] found id: "08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:28.277470  236655 cri.go:89] found id: ""
	I1020 12:43:28.277479  236655 logs.go:282] 1 containers: [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a]
	I1020 12:43:28.277538  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:28.281830  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:43:28.281894  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:43:28.311162  236655 cri.go:89] found id: ""
	I1020 12:43:28.311192  236655 logs.go:282] 0 containers: []
	W1020 12:43:28.311202  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:43:28.311210  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:43:28.311266  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:43:28.344270  236655 cri.go:89] found id: ""
	I1020 12:43:28.344297  236655 logs.go:282] 0 containers: []
	W1020 12:43:28.344307  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:43:28.344318  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:43:28.344334  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:28.400565  236655 logs.go:123] Gathering logs for kube-controller-manager [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a] ...
	I1020 12:43:28.400592  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:28.426481  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:43:28.426506  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:43:28.490973  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:43:28.491006  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:43:28.523917  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:43:28.523951  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:43:28.615507  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:43:28.615537  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:43:28.634269  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:43:28.634308  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:43:28.695892  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
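The connection-refused output simply reflects that the apiserver container is down at this point in the restart; a direct probe from the node would show the same (sketch):

	curl -sk --max-time 2 'https://localhost:8443/healthz' || echo "apiserver not yet serving"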
	I1020 12:43:28.695921  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:43:28.695936  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:31.075254  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 12:43:31.075282  282174 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 12:43:31.075354  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:31.095613  282174 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:43:31.095826  282174 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:43:31.096111  282174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:43:31.109667  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:31.116148  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:31.132473  282174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:43:31.218496  282174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:43:31.241456  282174 node_ready.go:35] waiting up to 6m0s for node "embed-certs-907116" to be "Ready" ...
	I1020 12:43:31.242727  282174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:43:31.250154  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 12:43:31.250193  282174 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 12:43:31.264712  282174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:43:31.278252  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 12:43:31.278384  282174 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 12:43:31.302465  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 12:43:31.302489  282174 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 12:43:31.330913  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 12:43:31.330937  282174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 12:43:31.362424  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 12:43:31.362453  282174 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 12:43:31.384748  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 12:43:31.384815  282174 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 12:43:31.404481  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 12:43:31.404515  282174 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 12:43:31.423735  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 12:43:31.423793  282174 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 12:43:31.441461  282174 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 12:43:31.441489  282174 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 12:43:31.459848  282174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
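After that batch apply, the dashboard objects should exist in the kubernetes-dashboard namespace (the addon's default) and can be listed with the same pinned kubectl, e.g.:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl get deploy,svc -n kubernetes-dashboard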
	I1020 12:43:32.877172  282174 node_ready.go:49] node "embed-certs-907116" is "Ready"
	I1020 12:43:32.877218  282174 node_ready.go:38] duration metric: took 1.635725927s for node "embed-certs-907116" to be "Ready" ...
	I1020 12:43:32.877239  282174 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:43:32.877296  282174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:43:33.427798  282174 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.185023352s)
	I1020 12:43:33.427846  282174 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.163101115s)
	I1020 12:43:33.427966  282174 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.968072821s)
	I1020 12:43:33.427985  282174 api_server.go:72] duration metric: took 2.396892657s to wait for apiserver process to appear ...
	I1020 12:43:33.427999  282174 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:43:33.428016  282174 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 12:43:33.430371  282174 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-907116 addons enable metrics-server
	
	I1020 12:43:33.435517  282174 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:43:33.435547  282174 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
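The [+]/[-] listing is the apiserver's verbose healthz body; the two failing post-start hooks (RBAC bootstrap roles and system priority classes) normally clear within seconds of startup. The same view can be fetched by hand (-k because the cluster CA is not in the local trust store):

	curl -sk 'https://192.168.76.2:8443/healthz?verbose'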
	I1020 12:43:33.442811  282174 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1020 12:43:33.444379  282174 addons.go:514] duration metric: took 2.413171026s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
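The enabled set can be confirmed from the host with the standard addons listing:

	minikube -p embed-certs-907116 addons list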
	I1020 12:43:31.229937  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:31.230353  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:31.230398  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:31.230445  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:31.275632  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:31.275654  236655 cri.go:89] found id: ""
	I1020 12:43:31.275664  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:31.275718  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:31.281509  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:31.281600  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:31.329695  236655 cri.go:89] found id: ""
	I1020 12:43:31.329844  236655 logs.go:282] 0 containers: []
	W1020 12:43:31.329863  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:31.329871  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:31.329937  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:43:31.367481  236655 cri.go:89] found id: ""
	I1020 12:43:31.367510  236655 logs.go:282] 0 containers: []
	W1020 12:43:31.367521  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:43:31.367529  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:43:31.367586  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:43:31.407242  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:31.407281  236655 cri.go:89] found id: ""
	I1020 12:43:31.407290  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:43:31.407376  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:31.412185  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:43:31.412257  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:43:31.450867  236655 cri.go:89] found id: ""
	I1020 12:43:31.450901  236655 logs.go:282] 0 containers: []
	W1020 12:43:31.450912  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:43:31.450919  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:43:31.450978  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:43:31.488614  236655 cri.go:89] found id: "08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:31.488646  236655 cri.go:89] found id: ""
	I1020 12:43:31.488655  236655 logs.go:282] 1 containers: [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a]
	I1020 12:43:31.488719  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:31.495664  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:43:31.495927  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:43:31.535426  236655 cri.go:89] found id: ""
	I1020 12:43:31.535550  236655 logs.go:282] 0 containers: []
	W1020 12:43:31.535564  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:43:31.535571  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:43:31.535633  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:43:31.573067  236655 cri.go:89] found id: ""
	I1020 12:43:31.573095  236655 logs.go:282] 0 containers: []
	W1020 12:43:31.573105  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:43:31.573116  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:43:31.573134  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:43:31.590595  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:43:31.590623  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:43:31.671142  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:43:31.671168  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:43:31.671184  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:31.721051  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:43:31.721086  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:31.812207  236655 logs.go:123] Gathering logs for kube-controller-manager [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a] ...
	I1020 12:43:31.812319  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:31.843512  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:43:31.843552  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:43:31.919396  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:43:31.919440  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:43:31.968423  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:43:31.968455  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
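The block above is one full pass of minikube's log-gathering loop: enumerate each control-plane component with `crictl ps`, then tail whatever containers were found. When the apiserver refuses connections like this, the same steps can be run by hand inside the node; `<profile>` below is a placeholder for the profile being debugged, and the commands are taken from the `Run:` lines above:

	# Reproduce one step of the loop by hand; the container ID comes from
	# the crictl ps output immediately above.
	minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	minikube -p <profile> ssh -- sudo crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25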
	I1020 12:43:34.594038  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:34.594467  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:34.594523  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:34.594577  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:34.622255  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:34.622276  236655 cri.go:89] found id: ""
	I1020 12:43:34.622283  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:34.622332  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:34.626360  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:34.626434  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:34.652754  236655 cri.go:89] found id: ""
	I1020 12:43:34.652802  236655 logs.go:282] 0 containers: []
	W1020 12:43:34.652814  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:34.652822  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:34.652887  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:43:34.680174  236655 cri.go:89] found id: ""
	I1020 12:43:34.680196  236655 logs.go:282] 0 containers: []
	W1020 12:43:34.680204  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:43:34.680209  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:43:34.680264  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:43:34.706480  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:34.706506  236655 cri.go:89] found id: ""
	I1020 12:43:34.706515  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:43:34.706579  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:34.710698  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:43:34.710768  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:43:34.737648  236655 cri.go:89] found id: ""
	I1020 12:43:34.737678  236655 logs.go:282] 0 containers: []
	W1020 12:43:34.737689  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:43:34.737697  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:43:34.737756  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:43:34.764563  236655 cri.go:89] found id: "08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:34.764590  236655 cri.go:89] found id: ""
	I1020 12:43:34.764602  236655 logs.go:282] 1 containers: [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a]
	I1020 12:43:34.764666  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:34.768542  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:43:34.768602  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:43:34.793986  236655 cri.go:89] found id: ""
	I1020 12:43:34.794008  236655 logs.go:282] 0 containers: []
	W1020 12:43:34.794015  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:43:34.794021  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:43:34.794088  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:43:34.821499  236655 cri.go:89] found id: ""
	I1020 12:43:34.821525  236655 logs.go:282] 0 containers: []
	W1020 12:43:34.821532  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:43:34.821541  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:43:34.821553  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:43:34.835962  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:43:34.835990  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:43:34.891744  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:43:34.891766  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:43:34.891798  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:34.928604  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:43:34.928642  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:34.994662  236655 logs.go:123] Gathering logs for kube-controller-manager [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a] ...
	I1020 12:43:34.994705  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:35.025651  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:43:35.025683  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:43:35.083732  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:43:35.083828  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:43:35.116172  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:43:35.116200  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:43:33.928476  282174 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 12:43:33.933279  282174 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:43:33.933325  282174 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:43:34.428927  282174 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 12:43:34.433179  282174 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1020 12:43:34.434171  282174 api_server.go:141] control plane version: v1.34.1
	I1020 12:43:34.434194  282174 api_server.go:131] duration metric: took 1.006189688s to wait for apiserver health ...
	I1020 12:43:34.434202  282174 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:43:34.437536  282174 system_pods.go:59] 8 kube-system pods found
	I1020 12:43:34.437566  282174 system_pods.go:61] "coredns-66bc5c9577-vpzk5" [7422dd44-eb83-44f9-8711-41a74794dfed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:43:34.437574  282174 system_pods.go:61] "etcd-embed-certs-907116" [c9b10ac7-f33b-4904-9bd4-e4b45fdbecc3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:43:34.437580  282174 system_pods.go:61] "kindnet-24g82" [86b2fc3f-2d40-4a2d-9068-75b0a952b958] Running
	I1020 12:43:34.437587  282174 system_pods.go:61] "kube-apiserver-embed-certs-907116" [bf5edeb3-2d81-45bc-87e2-6ab9a0e5640f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:43:34.437595  282174 system_pods.go:61] "kube-controller-manager-embed-certs-907116" [7897ad50-0673-4bd6-9cea-65cb1a82d2c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:43:34.437605  282174 system_pods.go:61] "kube-proxy-s2xbv" [f01f5d2c-f20c-42ea-a933-b6d15ea40244] Running
	I1020 12:43:34.437613  282174 system_pods.go:61] "kube-scheduler-embed-certs-907116" [f0bde9ca-242e-4259-91fc-73b86f6b9066] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:43:34.437619  282174 system_pods.go:61] "storage-provisioner" [83ece3ef-33e2-4353-9230-6bdd8c7320c0] Running
	I1020 12:43:34.437631  282174 system_pods.go:74] duration metric: took 3.422035ms to wait for pod list to return data ...
	I1020 12:43:34.437641  282174 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:43:34.440279  282174 default_sa.go:45] found service account: "default"
	I1020 12:43:34.440305  282174 default_sa.go:55] duration metric: took 2.656969ms for default service account to be created ...
	I1020 12:43:34.440316  282174 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:43:34.443019  282174 system_pods.go:86] 8 kube-system pods found
	I1020 12:43:34.443051  282174 system_pods.go:89] "coredns-66bc5c9577-vpzk5" [7422dd44-eb83-44f9-8711-41a74794dfed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:43:34.443066  282174 system_pods.go:89] "etcd-embed-certs-907116" [c9b10ac7-f33b-4904-9bd4-e4b45fdbecc3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:43:34.443074  282174 system_pods.go:89] "kindnet-24g82" [86b2fc3f-2d40-4a2d-9068-75b0a952b958] Running
	I1020 12:43:34.443084  282174 system_pods.go:89] "kube-apiserver-embed-certs-907116" [bf5edeb3-2d81-45bc-87e2-6ab9a0e5640f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:43:34.443095  282174 system_pods.go:89] "kube-controller-manager-embed-certs-907116" [7897ad50-0673-4bd6-9cea-65cb1a82d2c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:43:34.443106  282174 system_pods.go:89] "kube-proxy-s2xbv" [f01f5d2c-f20c-42ea-a933-b6d15ea40244] Running
	I1020 12:43:34.443126  282174 system_pods.go:89] "kube-scheduler-embed-certs-907116" [f0bde9ca-242e-4259-91fc-73b86f6b9066] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:43:34.443136  282174 system_pods.go:89] "storage-provisioner" [83ece3ef-33e2-4353-9230-6bdd8c7320c0] Running
	I1020 12:43:34.443145  282174 system_pods.go:126] duration metric: took 2.82209ms to wait for k8s-apps to be running ...
	I1020 12:43:34.443155  282174 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:43:34.443208  282174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:43:34.456643  282174 system_svc.go:56] duration metric: took 13.479504ms WaitForService to wait for kubelet
	I1020 12:43:34.456671  282174 kubeadm.go:586] duration metric: took 3.425579918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:43:34.456692  282174 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:43:34.459787  282174 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:43:34.459828  282174 node_conditions.go:123] node cpu capacity is 8
	I1020 12:43:34.459844  282174 node_conditions.go:105] duration metric: took 3.146734ms to run NodePressure ...
	I1020 12:43:34.459856  282174 start.go:241] waiting for startup goroutines ...
	I1020 12:43:34.459864  282174 start.go:246] waiting for cluster config update ...
	I1020 12:43:34.459874  282174 start.go:255] writing updated cluster config ...
	I1020 12:43:34.460125  282174 ssh_runner.go:195] Run: rm -f paused
	I1020 12:43:34.464153  282174 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:43:34.467524  282174 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vpzk5" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 12:43:36.473348  282174 pod_ready.go:104] pod "coredns-66bc5c9577-vpzk5" is not "Ready", error: <nil>
	W1020 12:43:38.474481  282174 pod_ready.go:104] pod "coredns-66bc5c9577-vpzk5" is not "Ready", error: <nil>
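The pod_ready wait above retries until coredns-66bc5c9577-vpzk5 reports Ready or the 4m0s budget runs out. A hand equivalent of the same condition, assuming kubectl is pointed at this cluster, would be:

	# Equivalent hand check of the condition the test is polling.
	kubectl -n kube-system wait --for=condition=Ready \
	  pod/coredns-66bc5c9577-vpzk5 --timeout=4m0s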
	I1020 12:43:37.717844  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:37.718275  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:37.718329  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:37.718393  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:37.754323  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:37.754370  236655 cri.go:89] found id: ""
	I1020 12:43:37.754381  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:37.754449  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:37.759972  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:37.760041  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:37.795405  236655 cri.go:89] found id: ""
	I1020 12:43:37.795434  236655 logs.go:282] 0 containers: []
	W1020 12:43:37.795443  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:37.795450  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:37.795508  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1020 12:43:37.829975  236655 cri.go:89] found id: ""
	I1020 12:43:37.830011  236655 logs.go:282] 0 containers: []
	W1020 12:43:37.830022  236655 logs.go:284] No container was found matching "coredns"
	I1020 12:43:37.830030  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1020 12:43:37.830093  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1020 12:43:37.871099  236655 cri.go:89] found id: "a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:37.871127  236655 cri.go:89] found id: ""
	I1020 12:43:37.871137  236655 logs.go:282] 1 containers: [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02]
	I1020 12:43:37.871196  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:37.876285  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1020 12:43:37.876356  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1020 12:43:37.908690  236655 cri.go:89] found id: ""
	I1020 12:43:37.908718  236655 logs.go:282] 0 containers: []
	W1020 12:43:37.908729  236655 logs.go:284] No container was found matching "kube-proxy"
	I1020 12:43:37.908737  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1020 12:43:37.908828  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1020 12:43:37.945866  236655 cri.go:89] found id: "08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:37.945896  236655 cri.go:89] found id: ""
	I1020 12:43:37.945906  236655 logs.go:282] 1 containers: [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a]
	I1020 12:43:37.945965  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:37.951747  236655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1020 12:43:37.951885  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1020 12:43:37.991796  236655 cri.go:89] found id: ""
	I1020 12:43:37.991826  236655 logs.go:282] 0 containers: []
	W1020 12:43:37.991836  236655 logs.go:284] No container was found matching "kindnet"
	I1020 12:43:37.991843  236655 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1020 12:43:37.991904  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1020 12:43:38.029214  236655 cri.go:89] found id: ""
	I1020 12:43:38.029241  236655 logs.go:282] 0 containers: []
	W1020 12:43:38.029253  236655 logs.go:284] No container was found matching "storage-provisioner"
	I1020 12:43:38.029264  236655 logs.go:123] Gathering logs for kubelet ...
	I1020 12:43:38.029282  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1020 12:43:38.167135  236655 logs.go:123] Gathering logs for dmesg ...
	I1020 12:43:38.167165  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1020 12:43:38.185949  236655 logs.go:123] Gathering logs for describe nodes ...
	I1020 12:43:38.185979  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1020 12:43:38.260207  236655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1020 12:43:38.260233  236655 logs.go:123] Gathering logs for kube-apiserver [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25] ...
	I1020 12:43:38.260248  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:38.303320  236655 logs.go:123] Gathering logs for kube-scheduler [a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02] ...
	I1020 12:43:38.303350  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a7c715b89d13c37a15a0d7cb0261c645d63f048978878e3d3d1d7dc35304af02"
	I1020 12:43:38.388952  236655 logs.go:123] Gathering logs for kube-controller-manager [08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a] ...
	I1020 12:43:38.389034  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 08b1ddb8c6e8c26a39638a2a15f96dacd16afb5b3e590b7d2df0c9c9d8890a3a"
	I1020 12:43:38.426168  236655 logs.go:123] Gathering logs for CRI-O ...
	I1020 12:43:38.426196  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1020 12:43:38.511979  236655 logs.go:123] Gathering logs for container status ...
	I1020 12:43:38.512017  236655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1020 12:43:41.054853  236655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1020 12:43:41.055292  236655 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1020 12:43:41.055356  236655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1020 12:43:41.055408  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1020 12:43:41.093388  236655 cri.go:89] found id: "40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25"
	I1020 12:43:41.093472  236655 cri.go:89] found id: ""
	I1020 12:43:41.093483  236655 logs.go:282] 1 containers: [40bf72eab0efa37c8f6b5920fbe8f6d45cb17ba8b3f254e1b320ce071f8a2a25]
	I1020 12:43:41.093555  236655 ssh_runner.go:195] Run: which crictl
	I1020 12:43:41.100624  236655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1020 12:43:41.100742  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1020 12:43:41.140030  236655 cri.go:89] found id: ""
	I1020 12:43:41.140056  236655 logs.go:282] 0 containers: []
	W1020 12:43:41.140067  236655 logs.go:284] No container was found matching "etcd"
	I1020 12:43:41.140079  236655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1020 12:43:41.140138  236655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	
	
	==> CRI-O <==
	Oct 20 12:43:15 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:15.501870119Z" level=info msg="Started container" PID=1752 containerID=d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769/dashboard-metrics-scraper id=870a4723-c4b7-49a8-b368-c05253c4a1e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71953c6975d23ba4f63b79aa41391cb750fc2ad4ca0ed33bb8b463268684827e
	Oct 20 12:43:15 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:15.578315325Z" level=info msg="Removing container: caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502" id=86d59558-a713-46c2-8caa-60631cb9cd2f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:43:15 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:15.592136835Z" level=info msg="Removed container caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769/dashboard-metrics-scraper" id=86d59558-a713-46c2-8caa-60631cb9cd2f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.602263574Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6c576dcf-366b-49e7-9224-d6b6a81b4475 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.603191988Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=01b51627-b656-4e5b-8379-834eeceb8309 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.60465722Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e3899b85-c46d-4ece-9b5d-5a9242bd3cb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.604811053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.60937039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.609514365Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/405c71fa53bba4d556ef1ad89650177a59713fc22ddff4754f7beba956854715/merged/etc/passwd: no such file or directory"
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.609539177Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/405c71fa53bba4d556ef1ad89650177a59713fc22ddff4754f7beba956854715/merged/etc/group: no such file or directory"
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.60986244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.633509834Z" level=info msg="Created container fe371429b88344749ac37a938a9d1bdc124657f68155bff696e9d16c03ceb4e6: kube-system/storage-provisioner/storage-provisioner" id=e3899b85-c46d-4ece-9b5d-5a9242bd3cb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.63419485Z" level=info msg="Starting container: fe371429b88344749ac37a938a9d1bdc124657f68155bff696e9d16c03ceb4e6" id=55132896-48c3-4cf0-9c38-2aa75662ea1a name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:43:23 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:23.636642534Z" level=info msg="Started container" PID=1766 containerID=fe371429b88344749ac37a938a9d1bdc124657f68155bff696e9d16c03ceb4e6 description=kube-system/storage-provisioner/storage-provisioner id=55132896-48c3-4cf0-9c38-2aa75662ea1a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8818ed6ff5c0bb8393917e743e929ca94648618e7e6d01d3d0e351f3731115e9
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.456065688Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0db072bd-73cb-4b0d-a0fe-b7f5706f4d0b name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.45703181Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=57b848bc-9edf-44d6-9c3b-ee71a1fbdd44 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.458069735Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769/dashboard-metrics-scraper" id=e089ea13-eba3-4e82-9987-bd092806a6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.458204274Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.463655129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.464116041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.501352554Z" level=info msg="Created container 52f945f0582ca2066d66902a718c474fe74c72c28edc6b6760d429767fb96487: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769/dashboard-metrics-scraper" id=e089ea13-eba3-4e82-9987-bd092806a6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.502067139Z" level=info msg="Starting container: 52f945f0582ca2066d66902a718c474fe74c72c28edc6b6760d429767fb96487" id=2316e1e0-c22a-461c-8685-50416423e397 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.504045441Z" level=info msg="Started container" PID=1802 containerID=52f945f0582ca2066d66902a718c474fe74c72c28edc6b6760d429767fb96487 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769/dashboard-metrics-scraper id=2316e1e0-c22a-461c-8685-50416423e397 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71953c6975d23ba4f63b79aa41391cb750fc2ad4ca0ed33bb8b463268684827e
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.642318223Z" level=info msg="Removing container: d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b" id=55697984-3d28-4ee6-95fe-8a975e14a035 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:43:37 default-k8s-diff-port-874012 crio[566]: time="2025-10-20T12:43:37.652043282Z" level=info msg="Removed container d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769/dashboard-metrics-scraper" id=55697984-3d28-4ee6-95fe-8a975e14a035 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	52f945f0582ca       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   71953c6975d23       dashboard-metrics-scraper-6ffb444bf9-sc769             kubernetes-dashboard
	fe371429b8834       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   8818ed6ff5c0b       storage-provisioner                                    kube-system
	997f5fb70cf17       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   4417a834810bd       kubernetes-dashboard-855c9754f9-p7w4b                  kubernetes-dashboard
	e03f2f95e6c14       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   2f74c867b0162       kindnet-jrv62                                          kube-system
	96ed2fb71faec       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   02fd34db040be       kube-proxy-bbw6k                                       kube-system
	07c72d8489055       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   332c5a97a043a       busybox                                                default
	7866a55261bf6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   e56c4790fd1b5       coredns-66bc5c9577-vd5sd                               kube-system
	949fa188399d8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   8818ed6ff5c0b       storage-provisioner                                    kube-system
	950cf2bcf663d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   82195466f2f6d       kube-apiserver-default-k8s-diff-port-874012            kube-system
	361bbce2ef1da       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   40ee20c300d3a       kube-scheduler-default-k8s-diff-port-874012            kube-system
	4701f0f003c88       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   9eb180e70ba13       kube-controller-manager-default-k8s-diff-port-874012   kube-system
	7c78acc071dce       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   44bb79ac3b98d       etcd-default-k8s-diff-port-874012                      kube-system
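In the table above, dashboard-metrics-scraper is Exited with ATTEMPT 3 — it is restart-looping while everything else runs. As a sketch, its output from the latest attempt can be pulled straight from CRI-O; the short ID is the prefix shown in the CONTAINER column:

	# Read the crash-looping container's log by ID prefix.
	minikube -p default-k8s-diff-port-874012 ssh -- sudo crictl logs 52f945f0582ca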
	
	
	==> coredns [7866a55261bf64a5c5e00ff9934f5375450ec837c58b9e9ea122dbc5064839b2] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48076 - 16981 "HINFO IN 3541001985407855094.5017519029671984671. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01334755s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
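The `dial tcp 10.96.0.1:443: i/o timeout` errors above are CoreDNS failing to reach the in-cluster `kubernetes` Service (the apiserver's cluster IP) from inside a pod, consistent with the apiserver flapping earlier in this log. A quick cross-check from outside the pod, assuming a working kubeconfig, would be:

	# Confirm the Service and the endpoint it should route to.
	kubectl get svc kubernetes -n default
	kubectl get endpoints kubernetes -n default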
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-874012
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-874012
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=default-k8s-diff-port-874012
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_41_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:41:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-874012
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:43:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:43:22 +0000   Mon, 20 Oct 2025 12:41:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:43:22 +0000   Mon, 20 Oct 2025 12:41:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:43:22 +0000   Mon, 20 Oct 2025 12:41:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:43:22 +0000   Mon, 20 Oct 2025 12:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-874012
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2780d33f-1af5-4f46-b321-ab4699252d20
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-vd5sd                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-default-k8s-diff-port-874012                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-jrv62                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-874012             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-874012    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-bbw6k                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-874012             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-sc769              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-p7w4b                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           107s               node-controller  Node default-k8s-diff-port-874012 event: Registered Node default-k8s-diff-port-874012 in Controller
	  Normal  NodeReady                95s                kubelet          Node default-k8s-diff-port-874012 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-874012 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node default-k8s-diff-port-874012 event: Registered Node default-k8s-diff-port-874012 in Controller
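The node report above is the output of the `kubectl describe nodes` call quoted earlier in this log; since minikube names the kubeconfig context after the profile, the same view can be fetched directly:

	# Same view, fetched by hand against this profile's context.
	kubectl --context default-k8s-diff-port-874012 describe node default-k8s-diff-port-874012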
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
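A "martian source" is a packet whose source address cannot legitimately arrive on the interface that received it; the repeated 10.244.0.20-from-127.0.0.1 entries above are pod-network noise rather than a failure signal. This filtered view is produced by the same command minikube records in its "Gathering logs for dmesg" steps:

	# Verbatim from the Run: lines earlier in this log.
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400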
	
	
	==> etcd [7c78acc071dce4799d081c9cd84fb7f3990161652fd814c617b6d088840d020a] <==
	{"level":"warn","ts":"2025-10-20T12:42:51.753981Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.802587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-10-20T12:42:51.754521Z","caller":"traceutil/trace.go:172","msg":"trace[1407051675] transaction","detail":"{read_only:false; number_of_response:0; response_revision:446; }","duration":"269.639641ms","start":"2025-10-20T12:42:51.484874Z","end":"2025-10-20T12:42:51.754514Z","steps":["trace[1407051675] 'process raft request'  (duration: 269.304054ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:42:51.754543Z","caller":"traceutil/trace.go:172","msg":"trace[1601164554] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:445; }","duration":"271.379532ms","start":"2025-10-20T12:42:51.483155Z","end":"2025-10-20T12:42:51.754535Z","steps":["trace[1601164554] 'agreement among raft nodes before linearized reading'  (duration: 270.743126ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T12:42:51.754338Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-20T12:42:51.451425Z","time spent":"302.670429ms","remote":"127.0.0.1:42346","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5087,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-874012\" mod_revision:391 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-874012\" value_size:5009 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-874012\" > >"}
	{"level":"warn","ts":"2025-10-20T12:42:51.754365Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"271.275422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2025-10-20T12:42:51.754677Z","caller":"traceutil/trace.go:172","msg":"trace[1586996241] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:446; }","duration":"271.581923ms","start":"2025-10-20T12:42:51.483077Z","end":"2025-10-20T12:42:51.754659Z","steps":["trace[1586996241] 'agreement among raft nodes before linearized reading'  (duration: 270.946496ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T12:42:51.931146Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.09488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:4341"}
	{"level":"info","ts":"2025-10-20T12:42:51.931222Z","caller":"traceutil/trace.go:172","msg":"trace[2048949999] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:446; }","duration":"153.184365ms","start":"2025-10-20T12:42:51.778017Z","end":"2025-10-20T12:42:51.931201Z","steps":["trace[2048949999] 'agreement among raft nodes before linearized reading'  (duration: 87.742915ms)","trace[2048949999] 'range keys from in-memory index tree'  (duration: 65.229624ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:51.931167Z","caller":"traceutil/trace.go:172","msg":"trace[1244085729] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"168.599329ms","start":"2025-10-20T12:42:51.762553Z","end":"2025-10-20T12:42:51.931153Z","steps":["trace[1244085729] 'process raft request'  (duration: 103.312908ms)","trace[1244085729] 'compare'  (duration: 65.164791ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:51.932001Z","caller":"traceutil/trace.go:172","msg":"trace[112224890] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"163.05423ms","start":"2025-10-20T12:42:51.768931Z","end":"2025-10-20T12:42:51.931985Z","steps":["trace[112224890] 'process raft request'  (duration: 162.894389ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:42:52.065586Z","caller":"traceutil/trace.go:172","msg":"trace[1994445504] linearizableReadLoop","detail":"{readStateIndex:476; appliedIndex:476; }","duration":"121.137769ms","start":"2025-10-20T12:42:51.944421Z","end":"2025-10-20T12:42:52.065559Z","steps":["trace[1994445504] 'read index received'  (duration: 121.126343ms)","trace[1994445504] 'applied index is now lower than readState.Index'  (duration: 9.136µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:52.105524Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.063935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T12:42:52.105609Z","caller":"traceutil/trace.go:172","msg":"trace[614722426] range","detail":"{range_begin:/registry/clusterrolebindings; range_end:; response_count:0; response_revision:448; }","duration":"161.169839ms","start":"2025-10-20T12:42:51.944412Z","end":"2025-10-20T12:42:52.105582Z","steps":["trace[614722426] 'agreement among raft nodes before linearized reading'  (duration: 121.222771ms)","trace[614722426] 'range keys from in-memory index tree'  (duration: 39.801152ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:52.105626Z","caller":"traceutil/trace.go:172","msg":"trace[1725100767] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"163.362451ms","start":"2025-10-20T12:42:51.942247Z","end":"2025-10-20T12:42:52.105610Z","steps":["trace[1725100767] 'process raft request'  (duration: 123.397811ms)","trace[1725100767] 'compare'  (duration: 39.833854ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:52.106367Z","caller":"traceutil/trace.go:172","msg":"trace[47872528] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"163.566151ms","start":"2025-10-20T12:42:51.942789Z","end":"2025-10-20T12:42:52.106355Z","steps":["trace[47872528] 'process raft request'  (duration: 163.445347ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:42:52.429127Z","caller":"traceutil/trace.go:172","msg":"trace[799609906] linearizableReadLoop","detail":"{readStateIndex:484; appliedIndex:484; }","duration":"124.442116ms","start":"2025-10-20T12:42:52.304660Z","end":"2025-10-20T12:42:52.429102Z","steps":["trace[799609906] 'read index received'  (duration: 124.43235ms)","trace[799609906] 'applied index is now lower than readState.Index'  (duration: 7.523µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:52.603150Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"298.466822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:1 size:2030"}
	{"level":"warn","ts":"2025-10-20T12:42:52.603190Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.932061ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789458942085856 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" value_size:867 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-20T12:42:52.603216Z","caller":"traceutil/trace.go:172","msg":"trace[1954402182] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:456; }","duration":"298.544054ms","start":"2025-10-20T12:42:52.304655Z","end":"2025-10-20T12:42:52.603199Z","steps":["trace[1954402182] 'agreement among raft nodes before linearized reading'  (duration: 124.527287ms)","trace[1954402182] 'range keys from in-memory index tree'  (duration: 173.834751ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:52.603344Z","caller":"traceutil/trace.go:172","msg":"trace[1103778544] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"298.780613ms","start":"2025-10-20T12:42:52.304543Z","end":"2025-10-20T12:42:52.603323Z","steps":["trace[1103778544] 'process raft request'  (duration: 124.661622ms)","trace[1103778544] 'compare'  (duration: 173.779097ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:52.603442Z","caller":"traceutil/trace.go:172","msg":"trace[395698756] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"289.739802ms","start":"2025-10-20T12:42:52.313687Z","end":"2025-10-20T12:42:52.603426Z","steps":["trace[395698756] 'process raft request'  (duration: 289.582863ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:42:52.893670Z","caller":"traceutil/trace.go:172","msg":"trace[493070206] linearizableReadLoop","detail":"{readStateIndex:490; appliedIndex:490; }","duration":"139.398571ms","start":"2025-10-20T12:42:52.754247Z","end":"2025-10-20T12:42:52.893645Z","steps":["trace[493070206] 'read index received'  (duration: 139.387849ms)","trace[493070206] 'applied index is now lower than readState.Index'  (duration: 9.428µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T12:42:52.987261Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"232.984121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient\" limit:1 ","response":"range_response_count:1 size:706"}
	{"level":"info","ts":"2025-10-20T12:42:52.987347Z","caller":"traceutil/trace.go:172","msg":"trace[1103536632] range","detail":"{range_begin:/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient; range_end:; response_count:1; response_revision:462; }","duration":"233.087919ms","start":"2025-10-20T12:42:52.754243Z","end":"2025-10-20T12:42:52.987331Z","steps":["trace[1103536632] 'agreement among raft nodes before linearized reading'  (duration: 139.48403ms)","trace[1103536632] 'range keys from in-memory index tree'  (duration: 93.370956ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:42:52.987636Z","caller":"traceutil/trace.go:172","msg":"trace[962722829] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"233.721057ms","start":"2025-10-20T12:42:52.753900Z","end":"2025-10-20T12:42:52.987621Z","steps":["trace[962722829] 'process raft request'  (duration: 139.751101ms)","trace[962722829] 'compare'  (duration: 93.858933ms)"],"step_count":2}
	
	
	==> kernel <==
	 12:43:44 up  1:26,  0 user,  load average: 4.21, 3.52, 2.30
	Linux default-k8s-diff-port-874012 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e03f2f95e6c14702b90f8c7799cdb5513504049e5e68dc0d01aace1a70f8e115] <==
	I1020 12:42:53.087584       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:42:53.088207       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1020 12:42:53.088394       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:42:53.088414       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:42:53.088439       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:42:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:42:53.291533       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:42:53.291560       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:42:53.291591       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:42:53.291918       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:42:53.783568       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:42:53.783607       1 metrics.go:72] Registering metrics
	I1020 12:42:53.783673       1 controller.go:711] "Syncing nftables rules"
	I1020 12:43:03.290966       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:43:03.291041       1 main.go:301] handling current node
	I1020 12:43:13.290928       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:43:13.290974       1 main.go:301] handling current node
	I1020 12:43:23.290957       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:43:23.290985       1 main.go:301] handling current node
	I1020 12:43:33.296871       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:43:33.296903       1 main.go:301] handling current node
	I1020 12:43:43.293942       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1020 12:43:43.293985       1 main.go:301] handling current node
	
	
	==> kube-apiserver [950cf2bcf663da8ddc81ce889407cc48e3d12e5e1bd9be508b2b13a09017120c] <==
	I1020 12:42:51.036738       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:42:51.036753       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1020 12:42:51.037080       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 12:42:51.037991       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 12:42:51.039304       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1020 12:42:51.041189       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 12:42:51.041464       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 12:42:51.041272       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1020 12:42:51.041593       1 policy_source.go:240] refreshing policies
	I1020 12:42:51.045580       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 12:42:51.048097       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1020 12:42:51.061334       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:42:51.074238       1 cache.go:39] Caches are synced for autoregister controller
	E1020 12:42:51.221704       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 12:42:51.378356       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:42:51.754954       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:42:51.941633       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:42:51.943374       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:42:52.197477       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:42:52.249095       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:42:53.010446       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.0.9"}
	I1020 12:42:53.023527       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.254.76"}
	I1020 12:42:55.359000       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:42:55.760797       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 12:42:55.909890       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4701f0f003c887f114d5da2a88fc8b6767f57ea38df31b2ec658e6f9e2ca07df] <==
	I1020 12:42:55.311910       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1020 12:42:55.311918       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1020 12:42:55.312981       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:42:55.319344       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 12:42:55.321622       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 12:42:55.325938       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 12:42:55.355514       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 12:42:55.355542       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 12:42:55.355555       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 12:42:55.355514       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 12:42:55.355743       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1020 12:42:55.355748       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:42:55.355895       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 12:42:55.355937       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 12:42:55.355961       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 12:42:55.357332       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1020 12:42:55.360047       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 12:42:55.364873       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:42:55.367129       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:42:55.367149       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:42:55.367160       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:42:55.369303       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1020 12:42:55.371490       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1020 12:42:55.373759       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:42:55.384109       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [96ed2fb71faeca4bae41804a971903dfe647f4945e3ac5a8e2c2c362359f0919] <==
	I1020 12:42:52.949037       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:42:53.006059       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:42:53.106159       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:42:53.106197       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1020 12:42:53.106295       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:42:53.137425       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:42:53.137496       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:42:53.145496       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:42:53.146062       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:42:53.146115       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:42:53.147690       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:42:53.147761       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:42:53.148086       1 config.go:200] "Starting service config controller"
	I1020 12:42:53.148202       1 config.go:309] "Starting node config controller"
	I1020 12:42:53.148352       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:42:53.148233       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:42:53.147764       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:42:53.148505       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:42:53.248555       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:42:53.248578       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:42:53.248608       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:42:53.249709       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [361bbce2ef1dab79033c19296471736ded91254dc81373034fb69f4e8ab8a98c] <==
	I1020 12:42:49.940103       1 serving.go:386] Generated self-signed cert in-memory
	I1020 12:42:51.003016       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 12:42:51.003041       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:42:51.007642       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1020 12:42:51.007653       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:42:51.007651       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:42:51.007689       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:42:51.007700       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:42:51.007691       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1020 12:42:51.007929       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 12:42:51.007949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 12:42:51.107955       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1020 12:42:51.107962       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:42:51.108193       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:42:56 default-k8s-diff-port-874012 kubelet[719]: I1020 12:42:56.109760     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqb7l\" (UniqueName: \"kubernetes.io/projected/5bed4e77-d51d-4392-adf0-69a3e5538205-kube-api-access-pqb7l\") pod \"kubernetes-dashboard-855c9754f9-p7w4b\" (UID: \"5bed4e77-d51d-4392-adf0-69a3e5538205\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p7w4b"
	Oct 20 12:42:59 default-k8s-diff-port-874012 kubelet[719]: I1020 12:42:59.525992     719 scope.go:117] "RemoveContainer" containerID="048972c342cb6435492b54fcd19cd646a2fa14d3f0f885fa877001293b3efa62"
	Oct 20 12:43:00 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:00.531317     719 scope.go:117] "RemoveContainer" containerID="048972c342cb6435492b54fcd19cd646a2fa14d3f0f885fa877001293b3efa62"
	Oct 20 12:43:00 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:00.531660     719 scope.go:117] "RemoveContainer" containerID="caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502"
	Oct 20 12:43:00 default-k8s-diff-port-874012 kubelet[719]: E1020 12:43:00.531874     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sc769_kubernetes-dashboard(b42abc13-1c7a-4eae-94c0-853accd3f9a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769" podUID="b42abc13-1c7a-4eae-94c0-853accd3f9a3"
	Oct 20 12:43:01 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:01.533994     719 scope.go:117] "RemoveContainer" containerID="caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502"
	Oct 20 12:43:01 default-k8s-diff-port-874012 kubelet[719]: E1020 12:43:01.534207     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sc769_kubernetes-dashboard(b42abc13-1c7a-4eae-94c0-853accd3f9a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769" podUID="b42abc13-1c7a-4eae-94c0-853accd3f9a3"
	Oct 20 12:43:02 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:02.538652     719 scope.go:117] "RemoveContainer" containerID="caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502"
	Oct 20 12:43:02 default-k8s-diff-port-874012 kubelet[719]: E1020 12:43:02.538924     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sc769_kubernetes-dashboard(b42abc13-1c7a-4eae-94c0-853accd3f9a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769" podUID="b42abc13-1c7a-4eae-94c0-853accd3f9a3"
	Oct 20 12:43:02 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:02.550710     719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p7w4b" podStartSLOduration=2.152596274 podStartE2EDuration="7.550688417s" podCreationTimestamp="2025-10-20 12:42:55 +0000 UTC" firstStartedPulling="2025-10-20 12:42:56.321988802 +0000 UTC m=+7.976333795" lastFinishedPulling="2025-10-20 12:43:01.72008096 +0000 UTC m=+13.374425938" observedRunningTime="2025-10-20 12:43:02.550457891 +0000 UTC m=+14.204802890" watchObservedRunningTime="2025-10-20 12:43:02.550688417 +0000 UTC m=+14.205033416"
	Oct 20 12:43:15 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:15.455312     719 scope.go:117] "RemoveContainer" containerID="caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502"
	Oct 20 12:43:15 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:15.575766     719 scope.go:117] "RemoveContainer" containerID="caf2c7bb2f2a4c72d02f8a72c1330bfa26302231744c5ca939ee22d395442502"
	Oct 20 12:43:15 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:15.576238     719 scope.go:117] "RemoveContainer" containerID="d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b"
	Oct 20 12:43:15 default-k8s-diff-port-874012 kubelet[719]: E1020 12:43:15.576460     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sc769_kubernetes-dashboard(b42abc13-1c7a-4eae-94c0-853accd3f9a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769" podUID="b42abc13-1c7a-4eae-94c0-853accd3f9a3"
	Oct 20 12:43:21 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:21.798143     719 scope.go:117] "RemoveContainer" containerID="d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b"
	Oct 20 12:43:21 default-k8s-diff-port-874012 kubelet[719]: E1020 12:43:21.798349     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sc769_kubernetes-dashboard(b42abc13-1c7a-4eae-94c0-853accd3f9a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769" podUID="b42abc13-1c7a-4eae-94c0-853accd3f9a3"
	Oct 20 12:43:23 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:23.601858     719 scope.go:117] "RemoveContainer" containerID="949fa188399d88fb36148cd3e18aead87c4e1915aac3b52977a50c822f49bd7f"
	Oct 20 12:43:37 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:37.455587     719 scope.go:117] "RemoveContainer" containerID="d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b"
	Oct 20 12:43:37 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:37.639445     719 scope.go:117] "RemoveContainer" containerID="d2caaae29c0b7a1e806f2993cde038dda53df660c0a9fbf6d15894c0f96cb31b"
	Oct 20 12:43:37 default-k8s-diff-port-874012 kubelet[719]: I1020 12:43:37.639756     719 scope.go:117] "RemoveContainer" containerID="52f945f0582ca2066d66902a718c474fe74c72c28edc6b6760d429767fb96487"
	Oct 20 12:43:37 default-k8s-diff-port-874012 kubelet[719]: E1020 12:43:37.639984     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sc769_kubernetes-dashboard(b42abc13-1c7a-4eae-94c0-853accd3f9a3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sc769" podUID="b42abc13-1c7a-4eae-94c0-853accd3f9a3"
	Oct 20 12:43:39 default-k8s-diff-port-874012 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 12:43:39 default-k8s-diff-port-874012 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 12:43:39 default-k8s-diff-port-874012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 20 12:43:39 default-k8s-diff-port-874012 systemd[1]: kubelet.service: Consumed 1.806s CPU time.
	
	
	==> kubernetes-dashboard [997f5fb70cf17401f9f118f22b72542195a6fa932ca73033e3cb05b2879ccce7] <==
	2025/10/20 12:43:01 Starting overwatch
	2025/10/20 12:43:01 Using namespace: kubernetes-dashboard
	2025/10/20 12:43:01 Using in-cluster config to connect to apiserver
	2025/10/20 12:43:01 Using secret token for csrf signing
	2025/10/20 12:43:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 12:43:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 12:43:01 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 12:43:01 Generating JWE encryption key
	2025/10/20 12:43:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 12:43:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 12:43:01 Initializing JWE encryption key from synchronized object
	2025/10/20 12:43:01 Creating in-cluster Sidecar client
	2025/10/20 12:43:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 12:43:01 Serving insecurely on HTTP port: 9090
	2025/10/20 12:43:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [949fa188399d88fb36148cd3e18aead87c4e1915aac3b52977a50c822f49bd7f] <==
	I1020 12:42:52.728896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 12:43:22.733152       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fe371429b88344749ac37a938a9d1bdc124657f68155bff696e9d16c03ceb4e6] <==
	I1020 12:43:23.649124       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 12:43:23.657579       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 12:43:23.657618       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 12:43:23.659847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:27.115839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:31.377873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:34.977160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:38.031198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:41.053878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:41.060130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:43:41.060297       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 12:43:41.060443       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"992be66e-ad31-4768-ae4d-5fe58274f9ef", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-874012_f053c512-15c3-436e-bb0c-1f95987eafed became leader
	I1020 12:43:41.060551       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-874012_f053c512-15c3-436e-bb0c-1f95987eafed!
	W1020 12:43:41.063443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:41.070287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:43:41.161511       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-874012_f053c512-15c3-436e-bb0c-1f95987eafed!
	W1020 12:43:43.073716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:43:43.080109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
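
Note on the etcd section of the logs above: it is dominated by "apply request took too long" warnings. etcd flags any request whose apply exceeds its 100ms expected-duration budget, and here the overruns reach roughly 300ms, which on a shared CI host points to disk or CPU contention rather than data corruption. As a small, hypothetical triage aid (the struct fields below merely mirror etcd's structured-log keys; none of this is etcd code), one of those JSON lines can be parsed and its overshoot computed like so:

	package main

	import (
		"encoding/json"
		"fmt"
		"time"
	)

	// slowApply mirrors only the structured-log keys used above; all other
	// fields in etcd's JSON lines are ignored by encoding/json.
	type slowApply struct {
		Level    string `json:"level"`
		Msg      string `json:"msg"`
		Took     string `json:"took"`
		Expected string `json:"expected-duration"`
	}

	func main() {
		raw := `{"level":"warn","msg":"apply request took too long","took":"270.802587ms","expected-duration":"100ms"}`
		var e slowApply
		if err := json.Unmarshal([]byte(raw), &e); err != nil {
			panic(err)
		}
		took, _ := time.ParseDuration(e.Took)
		budget, _ := time.ParseDuration(e.Expected)
		// Prints: apply request took too long: 170.802587ms over the 100ms budget
		fmt.Printf("%s: %v over the %v budget\n", e.Msg, took-budget, budget)
	}
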
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012: exit status 2 (345.083321ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
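
The --format={{.APIServer}} flag used above is a Go text/template evaluated against minikube's status struct, which is why stdout is just "Running" even though the exit status (2) separately encodes degraded components. A minimal standalone sketch of that formatting step; the Status type and its field set here are assumptions for illustration, not minikube's exact type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status approximates the shape implied by --format={{.APIServer}};
	// the field set is an illustrative assumption.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// Prints "Running": the template selects one field, while the
		// command's exit code reports overall health out of band.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"})
	}
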
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-874012 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.59s)
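
One detail worth noting from the kubelet section of the post-mortem logs above: the dashboard-metrics-scraper restarts walk the CrashLoopBackOff schedule (back-off 10s, then 20s, then 40s). Kubelet doubles the delay on each crash up to its documented 5-minute cap; the toy sketch below reproduces that schedule (the doubling and cap are kubelet's documented behavior, the code itself is purely illustrative):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// CrashLoopBackOff: start at 10s, double per restart, cap at 5m.
		backoff, maxBackoff := 10*time.Second, 5*time.Minute
		for restart := 1; restart <= 7; restart++ {
			fmt.Printf("restart %d: back-off %v\n", restart, backoff)
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}
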

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-907116 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-907116 --alsologtostderr -v=1: exit status 80 (1.825358012s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-907116 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 12:44:25.656584  299475 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:44:25.656890  299475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:44:25.656902  299475 out.go:374] Setting ErrFile to fd 2...
	I1020 12:44:25.656907  299475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:44:25.657095  299475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:44:25.657304  299475 out.go:368] Setting JSON to false
	I1020 12:44:25.657342  299475 mustload.go:65] Loading cluster: embed-certs-907116
	I1020 12:44:25.657641  299475 config.go:182] Loaded profile config "embed-certs-907116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:44:25.658049  299475 cli_runner.go:164] Run: docker container inspect embed-certs-907116 --format={{.State.Status}}
	I1020 12:44:25.677613  299475 host.go:66] Checking if "embed-certs-907116" exists ...
	I1020 12:44:25.677992  299475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:44:25.740664  299475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-20 12:44:25.730641494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:44:25.741277  299475 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-907116 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 12:44:25.744313  299475 out.go:179] * Pausing node embed-certs-907116 ... 
	I1020 12:44:25.745575  299475 host.go:66] Checking if "embed-certs-907116" exists ...
	I1020 12:44:25.745885  299475 ssh_runner.go:195] Run: systemctl --version
	I1020 12:44:25.745922  299475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-907116
	I1020 12:44:25.763049  299475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/embed-certs-907116/id_rsa Username:docker}
	I1020 12:44:25.863684  299475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:44:25.890397  299475 pause.go:52] kubelet running: true
	I1020 12:44:25.890465  299475 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:44:26.058824  299475 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:44:26.058908  299475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:44:26.130154  299475 cri.go:89] found id: "8d38623393b88729921c4e30b52c75f00b003c405f1cfe26c42bfceddabd4e95"
	I1020 12:44:26.130180  299475 cri.go:89] found id: "15ef78b953e819a004b34f819cf429a261c9139ff8a41f2f50eede4db5a65bde"
	I1020 12:44:26.130185  299475 cri.go:89] found id: "43d66af915825ae45fb963115486c3a36542c4a768ce4d5fed2ff9bc19ed78cc"
	I1020 12:44:26.130203  299475 cri.go:89] found id: "9f6137e79a6af824320fb4d2c61c014d11280ad3d72aaf8477198b8a808bfe57"
	I1020 12:44:26.130208  299475 cri.go:89] found id: "e624948cc12c19f3af9a7254915b203473031c57f36bc03588d8688e77b1c89d"
	I1020 12:44:26.130212  299475 cri.go:89] found id: "71b8d519c8fcfa7da65604dd578e2dc4d11fc1ca223185cae0a2ce646c90d777"
	I1020 12:44:26.130216  299475 cri.go:89] found id: "b16a54394efdcb6933a56df962f7f8423ae93b34d8452a6afc5f404b46da576e"
	I1020 12:44:26.130220  299475 cri.go:89] found id: "22cf3642d99bbb980929d5d8e78116ccc79fbe6f90ed96694a1910e81f25dac6"
	I1020 12:44:26.130224  299475 cri.go:89] found id: "c4cc4d9df25ab88c844bc98d6506700dac4d75294815034c92cfa41e1ddb2d01"
	I1020 12:44:26.130232  299475 cri.go:89] found id: "d552900c00f6673b2cbe4e7bd3a92fe95e581c59ad9e049938e311cfdf8dd277"
	I1020 12:44:26.130239  299475 cri.go:89] found id: "d1e0d8719fc2a02f1a574fface75a559d0703a7f0c071f3f9e982fe3484fee6e"
	I1020 12:44:26.130244  299475 cri.go:89] found id: ""
	I1020 12:44:26.130300  299475 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:44:26.143124  299475 retry.go:31] will retry after 239.718757ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:44:26Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:44:26.383647  299475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:44:26.396971  299475 pause.go:52] kubelet running: false
	I1020 12:44:26.397030  299475 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:44:26.545140  299475 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:44:26.545234  299475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:44:26.615902  299475 cri.go:89] found id: "8d38623393b88729921c4e30b52c75f00b003c405f1cfe26c42bfceddabd4e95"
	I1020 12:44:26.615922  299475 cri.go:89] found id: "15ef78b953e819a004b34f819cf429a261c9139ff8a41f2f50eede4db5a65bde"
	I1020 12:44:26.615926  299475 cri.go:89] found id: "43d66af915825ae45fb963115486c3a36542c4a768ce4d5fed2ff9bc19ed78cc"
	I1020 12:44:26.615929  299475 cri.go:89] found id: "9f6137e79a6af824320fb4d2c61c014d11280ad3d72aaf8477198b8a808bfe57"
	I1020 12:44:26.615931  299475 cri.go:89] found id: "e624948cc12c19f3af9a7254915b203473031c57f36bc03588d8688e77b1c89d"
	I1020 12:44:26.615934  299475 cri.go:89] found id: "71b8d519c8fcfa7da65604dd578e2dc4d11fc1ca223185cae0a2ce646c90d777"
	I1020 12:44:26.615937  299475 cri.go:89] found id: "b16a54394efdcb6933a56df962f7f8423ae93b34d8452a6afc5f404b46da576e"
	I1020 12:44:26.615940  299475 cri.go:89] found id: "22cf3642d99bbb980929d5d8e78116ccc79fbe6f90ed96694a1910e81f25dac6"
	I1020 12:44:26.615942  299475 cri.go:89] found id: "c4cc4d9df25ab88c844bc98d6506700dac4d75294815034c92cfa41e1ddb2d01"
	I1020 12:44:26.615947  299475 cri.go:89] found id: "d552900c00f6673b2cbe4e7bd3a92fe95e581c59ad9e049938e311cfdf8dd277"
	I1020 12:44:26.615950  299475 cri.go:89] found id: "d1e0d8719fc2a02f1a574fface75a559d0703a7f0c071f3f9e982fe3484fee6e"
	I1020 12:44:26.615952  299475 cri.go:89] found id: ""
	I1020 12:44:26.615988  299475 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:44:26.628183  299475 retry.go:31] will retry after 557.043423ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:44:26Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:44:27.185925  299475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:44:27.199882  299475 pause.go:52] kubelet running: false
	I1020 12:44:27.199945  299475 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 12:44:27.345701  299475 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 12:44:27.345813  299475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 12:44:27.413565  299475 cri.go:89] found id: "8d38623393b88729921c4e30b52c75f00b003c405f1cfe26c42bfceddabd4e95"
	I1020 12:44:27.413585  299475 cri.go:89] found id: "15ef78b953e819a004b34f819cf429a261c9139ff8a41f2f50eede4db5a65bde"
	I1020 12:44:27.413588  299475 cri.go:89] found id: "43d66af915825ae45fb963115486c3a36542c4a768ce4d5fed2ff9bc19ed78cc"
	I1020 12:44:27.413592  299475 cri.go:89] found id: "9f6137e79a6af824320fb4d2c61c014d11280ad3d72aaf8477198b8a808bfe57"
	I1020 12:44:27.413595  299475 cri.go:89] found id: "e624948cc12c19f3af9a7254915b203473031c57f36bc03588d8688e77b1c89d"
	I1020 12:44:27.413599  299475 cri.go:89] found id: "71b8d519c8fcfa7da65604dd578e2dc4d11fc1ca223185cae0a2ce646c90d777"
	I1020 12:44:27.413601  299475 cri.go:89] found id: "b16a54394efdcb6933a56df962f7f8423ae93b34d8452a6afc5f404b46da576e"
	I1020 12:44:27.413604  299475 cri.go:89] found id: "22cf3642d99bbb980929d5d8e78116ccc79fbe6f90ed96694a1910e81f25dac6"
	I1020 12:44:27.413606  299475 cri.go:89] found id: "c4cc4d9df25ab88c844bc98d6506700dac4d75294815034c92cfa41e1ddb2d01"
	I1020 12:44:27.413611  299475 cri.go:89] found id: "d552900c00f6673b2cbe4e7bd3a92fe95e581c59ad9e049938e311cfdf8dd277"
	I1020 12:44:27.413614  299475 cri.go:89] found id: "d1e0d8719fc2a02f1a574fface75a559d0703a7f0c071f3f9e982fe3484fee6e"
	I1020 12:44:27.413616  299475 cri.go:89] found id: ""
	I1020 12:44:27.413687  299475 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:44:27.427974  299475 out.go:203] 
	W1020 12:44:27.429385  299475 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:44:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:44:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:44:27.429400  299475 out.go:285] * 
	* 
	W1020 12:44:27.433723  299475 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:44:27.435049  299475 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-907116 --alsologtostderr -v=1 failed: exit status 80
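
The stderr above shows the failure mechanics: pause stops the kubelet, then repeatedly runs sudo runc list -f json (the retry.go lines back off roughly 240ms, then 560ms) and finally exits with GUEST_PAUSE once retries are exhausted, since every attempt fails with open /run/runc: no such file or directory. A minimal sketch of that retry-then-fail pattern follows; the listRunning helper, the local os/exec call, and the exact backoff values are illustrative assumptions, not minikube's code, which runs the command over SSH:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRunning shells out to `runc list -f json`, the same command the
	// pause path runs on the node. Hypothetical helper for this sketch.
	func listRunning() ([]byte, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("runc list -f json: %w (output: %q)", err, out)
		}
		return out, nil
	}

	func main() {
		// Backoff schedule approximating the intervals in the log above.
		backoffs := []time.Duration{240 * time.Millisecond, 560 * time.Millisecond}
		var lastErr error
		for attempt := 0; ; attempt++ {
			out, err := listRunning()
			if err == nil {
				fmt.Printf("running containers: %s\n", out)
				return
			}
			lastErr = err
			if attempt >= len(backoffs) {
				break
			}
			fmt.Printf("will retry after %v: %v\n", backoffs[attempt], err)
			time.Sleep(backoffs[attempt])
		}
		// After exhausting retries, surface the failure the way pause did.
		fmt.Printf("Exiting due to GUEST_PAUSE: %v\n", lastErr)
	}
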
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-907116
helpers_test.go:243: (dbg) docker inspect embed-certs-907116:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff",
	        "Created": "2025-10-20T12:42:20.232246368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282376,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:43:23.947248853Z",
	            "FinishedAt": "2025-10-20T12:43:23.111913642Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff/hosts",
	        "LogPath": "/var/lib/docker/containers/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff-json.log",
	        "Name": "/embed-certs-907116",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-907116:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-907116",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff",
	                "LowerDir": "/var/lib/docker/overlay2/93acb91eef931e2da86d1db5f88d8dcfad07ad2799ddc8a67593938796c2477a-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/93acb91eef931e2da86d1db5f88d8dcfad07ad2799ddc8a67593938796c2477a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/93acb91eef931e2da86d1db5f88d8dcfad07ad2799ddc8a67593938796c2477a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/93acb91eef931e2da86d1db5f88d8dcfad07ad2799ddc8a67593938796c2477a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-907116",
	                "Source": "/var/lib/docker/volumes/embed-certs-907116/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-907116",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-907116",
	                "name.minikube.sigs.k8s.io": "embed-certs-907116",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4dd377a3e78f245abb9abebbd1c02e79c4568c6c7f2ed56ec280372438a0b231",
	            "SandboxKey": "/var/run/docker/netns/4dd377a3e78f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-907116": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:b2:41:43:ff:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4e327fc0cc35f5e99ec36d310a3ce8c7214de7f81deb736225deef68fe8ea58b",
	                    "EndpointID": "7de7302fe808da40d714302be8ae3b6fffeae8e89301a3c204c849e112a0940b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-907116",
	                        "dde9a162828e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
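The same fields can be read straight out of docker container inspect with Go templates instead of scanning the full JSON dump above; the harness uses the same template pattern later in these logs. A minimal sketch, with the profile name from this run:

	# Container state (reported "running" above even while the cluster is paused at the CRI level)
	docker container inspect -f '{{.State.Status}}' embed-certs-907116

	# Host port published for the API server port 8443/tcp (33106 in this run)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-907116

	# Node IP on the profile network (192.168.76.2 above)
	docker container inspect -f '{{(index .NetworkSettings.Networks "embed-certs-907116").IPAddress}}' embed-certs-907116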
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-907116 -n embed-certs-907116
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-907116 -n embed-certs-907116: exit status 2 (318.123845ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
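The non-zero exit is expected here: minikube status encodes component health in its exit code, which is why the harness notes "(may be ok)" for a paused profile. A minimal sketch for reading the individual components instead of the aggregate exit code (the exact states shown for a paused profile are an assumption):

	# Prints one state per component; on a paused profile the host container is
	# typically still Running while kubelet and apiserver are no longer healthy.
	out/minikube-linux-amd64 status -p embed-certs-907116 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'
	echo "exit code: $?"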
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-907116 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-907116 logs -n 25: (1.153124871s)
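When reproducing locally, the same post-mortem dump can be captured to a file rather than scraped from stdout; a small sketch, assuming the logs command's --file flag (the output filename is illustrative):

	out/minikube-linux-amd64 -p embed-certs-907116 logs -n 25 \
	  --file=embed-certs-907116-postmortem.log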
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-312375 sudo cat /var/lib/kubelet/config.yaml                                                                                     │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo systemctl status docker --all --full --no-pager                                                                      │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 sudo systemctl cat docker --no-pager                                                                                      │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo cat /etc/docker/daemon.json                                                                                          │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 sudo docker system info                                                                                                   │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 sudo systemctl status cri-docker --all --full --no-pager                                                                  │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 sudo systemctl cat cri-docker --no-pager                                                                                  │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                             │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                       │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo cri-dockerd --version                                                                                                │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo systemctl status containerd --all --full --no-pager                                                                  │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 sudo systemctl cat containerd --no-pager                                                                                  │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo cat /lib/systemd/system/containerd.service                                                                           │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo cat /etc/containerd/config.toml                                                                                      │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo containerd config dump                                                                                               │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo systemctl status crio --all --full --no-pager                                                                        │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo systemctl cat crio --no-pager                                                                                        │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                              │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo crio config                                                                                                          │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ delete  │ -p auto-312375                                                                                                                           │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ start   │ -p calico-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio   │ calico-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:44 UTC │                     │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:44 UTC │                     │
	│ image   │ embed-certs-907116 image list --format=json                                                                                              │ embed-certs-907116        │ jenkins │ v1.37.0 │ 20 Oct 25 12:44 UTC │ 20 Oct 25 12:44 UTC │
	│ pause   │ -p embed-certs-907116 --alsologtostderr -v=1                                                                                             │ embed-certs-907116        │ jenkins │ v1.37.0 │ 20 Oct 25 12:44 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:44:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:44:05.338558  296590 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:44:05.338678  296590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:44:05.338688  296590 out.go:374] Setting ErrFile to fd 2...
	I1020 12:44:05.338693  296590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:44:05.338931  296590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:44:05.339467  296590 out.go:368] Setting JSON to false
	I1020 12:44:05.340944  296590 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5194,"bootTime":1760959051,"procs":386,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:44:05.341066  296590 start.go:141] virtualization: kvm guest
	I1020 12:44:05.342934  296590 out.go:179] * [kubernetes-upgrade-196539] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:44:05.344639  296590 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:44:05.344683  296590 notify.go:220] Checking for updates...
	I1020 12:44:05.347205  296590 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:44:05.348843  296590 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:44:05.350259  296590 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:44:05.351478  296590 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:44:05.352682  296590 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:44:05.354462  296590 config.go:182] Loaded profile config "kubernetes-upgrade-196539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:44:05.355188  296590 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:44:05.386914  296590 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:44:05.387044  296590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:44:05.456800  296590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-20 12:44:05.444697997 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:44:05.456900  296590 docker.go:318] overlay module found
	I1020 12:44:05.460940  296590 out.go:179] * Using the docker driver based on existing profile
	I1020 12:44:05.462355  296590 start.go:305] selected driver: docker
	I1020 12:44:05.462374  296590 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-196539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-196539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:44:05.462464  296590 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:44:05.463033  296590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:44:05.529521  296590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-20 12:44:05.519757381 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:44:05.529835  296590 cni.go:84] Creating CNI manager for ""
	I1020 12:44:05.529894  296590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:44:05.529921  296590 start.go:349] cluster config:
	{Name:kubernetes-upgrade-196539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-196539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:44:05.534344  296590 out.go:179] * Starting "kubernetes-upgrade-196539" primary control-plane node in "kubernetes-upgrade-196539" cluster
	I1020 12:44:05.535806  296590 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:44:05.537362  296590 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:44:05.538737  296590 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:44:05.538800  296590 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:44:05.538816  296590 cache.go:58] Caching tarball of preloaded images
	I1020 12:44:05.538814  296590 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:44:05.538929  296590 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:44:05.538944  296590 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:44:05.539092  296590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/config.json ...
	I1020 12:44:05.561814  296590 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:44:05.561835  296590 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:44:05.561855  296590 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:44:05.561881  296590 start.go:360] acquireMachinesLock for kubernetes-upgrade-196539: {Name:mk1d06f9572547ac12885711cb1bcf0c77e257ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:44:05.561947  296590 start.go:364] duration metric: took 43.144µs to acquireMachinesLock for "kubernetes-upgrade-196539"
	I1020 12:44:05.561968  296590 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:44:05.561978  296590 fix.go:54] fixHost starting: 
	I1020 12:44:05.562248  296590 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-196539 --format={{.State.Status}}
	I1020 12:44:05.581398  296590 fix.go:112] recreateIfNeeded on kubernetes-upgrade-196539: state=Running err=<nil>
	W1020 12:44:05.581429  296590 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 12:44:03.960585  294742 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-312375:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.810642037s)
	I1020 12:44:03.960611  294742 kic.go:203] duration metric: took 4.810780457s to extract preloaded images to volume ...
	W1020 12:44:03.960694  294742 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 12:44:03.960728  294742 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 12:44:03.960763  294742 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:44:04.042732  294742 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-312375 --name calico-312375 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-312375 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-312375 --network calico-312375 --ip 192.168.85.2 --volume calico-312375:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 12:44:04.392935  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Running}}
	I1020 12:44:04.417928  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Status}}
	I1020 12:44:04.443348  294742 cli_runner.go:164] Run: docker exec calico-312375 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:44:04.537118  294742 oci.go:144] the created container "calico-312375" has a running status.
	I1020 12:44:04.537152  294742 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa...
	I1020 12:44:04.823733  294742 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:44:04.858892  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Status}}
	I1020 12:44:04.889022  294742 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:44:04.889045  294742 kic_runner.go:114] Args: [docker exec --privileged calico-312375 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:44:04.941923  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Status}}
	I1020 12:44:04.970702  294742 machine.go:93] provisionDockerMachine start ...
	I1020 12:44:04.970827  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:05.000949  294742 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:05.001216  294742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1020 12:44:05.001231  294742 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:44:05.164353  294742 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-312375
	
	I1020 12:44:05.164428  294742 ubuntu.go:182] provisioning hostname "calico-312375"
	I1020 12:44:05.164493  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:05.188350  294742 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:05.188657  294742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1020 12:44:05.188679  294742 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-312375 && echo "calico-312375" | sudo tee /etc/hostname
	I1020 12:44:05.359621  294742 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-312375
	
	I1020 12:44:05.359719  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:05.384710  294742 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:05.384985  294742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1020 12:44:05.385018  294742 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-312375' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-312375/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-312375' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:44:05.537683  294742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:44:05.537717  294742 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:44:05.537757  294742 ubuntu.go:190] setting up certificates
	I1020 12:44:05.537789  294742 provision.go:84] configureAuth start
	I1020 12:44:05.537856  294742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-312375
	I1020 12:44:05.558564  294742 provision.go:143] copyHostCerts
	I1020 12:44:05.558637  294742 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:44:05.558648  294742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:44:05.558736  294742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:44:05.558900  294742 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:44:05.558914  294742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:44:05.558959  294742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:44:05.559075  294742 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:44:05.559088  294742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:44:05.559127  294742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:44:05.559222  294742 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.calico-312375 san=[127.0.0.1 192.168.85.2 calico-312375 localhost minikube]
	I1020 12:44:05.892461  294742 provision.go:177] copyRemoteCerts
	I1020 12:44:05.892526  294742 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:44:05.892569  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:05.914703  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:06.024698  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:44:06.049420  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1020 12:44:06.070184  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:44:06.089997  294742 provision.go:87] duration metric: took 552.185991ms to configureAuth
	I1020 12:44:06.090037  294742 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:44:06.090193  294742 config.go:182] Loaded profile config "calico-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:44:06.090300  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:06.111516  294742 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:06.111760  294742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1020 12:44:06.111814  294742 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:44:06.383022  294742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:44:06.383066  294742 machine.go:96] duration metric: took 1.412340954s to provisionDockerMachine
	I1020 12:44:06.383077  294742 client.go:171] duration metric: took 7.84715027s to LocalClient.Create
	I1020 12:44:06.383093  294742 start.go:167] duration metric: took 7.847213295s to libmachine.API.Create "calico-312375"
	I1020 12:44:06.383103  294742 start.go:293] postStartSetup for "calico-312375" (driver="docker")
	I1020 12:44:06.383119  294742 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:44:06.383180  294742 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:44:06.383223  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:06.402180  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:06.509633  294742 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:44:06.514277  294742 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:44:06.514320  294742 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:44:06.514334  294742 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:44:06.514394  294742 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:44:06.514507  294742 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:44:06.514607  294742 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:44:06.523396  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:44:06.546781  294742 start.go:296] duration metric: took 163.646503ms for postStartSetup
	I1020 12:44:06.547164  294742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-312375
	I1020 12:44:06.567436  294742 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/config.json ...
	I1020 12:44:06.567741  294742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:44:06.567812  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:06.588023  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:06.687097  294742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:44:06.691969  294742 start.go:128] duration metric: took 8.158497731s to createHost
	I1020 12:44:06.692069  294742 start.go:83] releasing machines lock for "calico-312375", held for 8.158764349s
	I1020 12:44:06.692162  294742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-312375
	I1020 12:44:06.712400  294742 ssh_runner.go:195] Run: cat /version.json
	I1020 12:44:06.712456  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:06.712481  294742 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:44:06.712547  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:06.733444  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:06.734066  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:06.901646  294742 ssh_runner.go:195] Run: systemctl --version
	I1020 12:44:06.909756  294742 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:44:06.959193  294742 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:44:06.965303  294742 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:44:06.965375  294742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:44:07.008888  294742 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1020 12:44:07.008917  294742 start.go:495] detecting cgroup driver to use...
	I1020 12:44:07.008952  294742 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:44:07.009004  294742 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:44:07.035141  294742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:44:07.054137  294742 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:44:07.054211  294742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:44:07.079378  294742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:44:07.103118  294742 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:44:07.209760  294742 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:44:07.321594  294742 docker.go:234] disabling docker service ...
	I1020 12:44:07.321668  294742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:44:07.345181  294742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:44:07.360697  294742 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:44:07.490013  294742 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:44:07.602041  294742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:44:07.618630  294742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:44:07.634357  294742 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:44:07.634417  294742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.645469  294742 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:44:07.645610  294742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.655809  294742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.666214  294742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.675458  294742 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:44:07.684099  294742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.693633  294742 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.709957  294742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.719613  294742 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:44:07.727987  294742 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:44:07.736655  294742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:44:07.845121  294742 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:44:07.994845  294742 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:44:07.994921  294742 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:44:08.000289  294742 start.go:563] Will wait 60s for crictl version
	I1020 12:44:08.000357  294742 ssh_runner.go:195] Run: which crictl
	I1020 12:44:08.005502  294742 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:44:08.053214  294742 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:44:08.053344  294742 ssh_runner.go:195] Run: crio --version
	I1020 12:44:08.093432  294742 ssh_runner.go:195] Run: crio --version
	I1020 12:44:08.137011  294742 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:44:08.138570  294742 cli_runner.go:164] Run: docker network inspect calico-312375 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:44:08.164351  294742 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 12:44:08.169095  294742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:44:08.183871  294742 kubeadm.go:883] updating cluster {Name:calico-312375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-312375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:44:08.184022  294742 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:44:08.184099  294742 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:44:08.238216  294742 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:44:08.238239  294742 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:44:08.238293  294742 ssh_runner.go:195] Run: sudo crictl images --output json
	W1020 12:44:05.973622  282174 pod_ready.go:104] pod "coredns-66bc5c9577-vpzk5" is not "Ready", error: <nil>
	W1020 12:44:07.974822  282174 pod_ready.go:104] pod "coredns-66bc5c9577-vpzk5" is not "Ready", error: <nil>
	I1020 12:44:05.583417  296590 out.go:252] * Updating the running docker "kubernetes-upgrade-196539" container ...
	I1020 12:44:05.583447  296590 machine.go:93] provisionDockerMachine start ...
	I1020 12:44:05.583527  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:05.604993  296590 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:05.605324  296590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1020 12:44:05.605340  296590 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:44:05.749240  296590 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-196539
	
	I1020 12:44:05.749284  296590 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-196539"
	I1020 12:44:05.749366  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:05.768474  296590 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:05.768811  296590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1020 12:44:05.768830  296590 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-196539 && echo "kubernetes-upgrade-196539" | sudo tee /etc/hostname
	I1020 12:44:05.925416  296590 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-196539
	
	I1020 12:44:05.925527  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:05.951506  296590 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:05.951737  296590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1020 12:44:05.951756  296590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-196539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-196539/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-196539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:44:06.100129  296590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
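Note: the if/else fragment above is idempotent: it rewrites an existing 127.0.1.1 entry in /etc/hosts when one is already present and appends one otherwise, so re-provisioning the same machine never duplicates the line. A minimal sketch to confirm the result by hand, assuming shell access to the node:

	grep '^127.0.1.1' /etc/hosts
	# expected output: 127.0.1.1 kubernetes-upgrade-196539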
	I1020 12:44:06.100159  296590 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:44:06.100193  296590 ubuntu.go:190] setting up certificates
	I1020 12:44:06.100218  296590 provision.go:84] configureAuth start
	I1020 12:44:06.100297  296590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-196539
	I1020 12:44:06.121903  296590 provision.go:143] copyHostCerts
	I1020 12:44:06.121971  296590 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:44:06.121990  296590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:44:06.122058  296590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:44:06.122213  296590 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:44:06.122227  296590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:44:06.122258  296590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:44:06.122351  296590 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:44:06.122362  296590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:44:06.122389  296590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:44:06.122471  296590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-196539 san=[127.0.0.1 192.168.94.2 kubernetes-upgrade-196539 localhost minikube]
	I1020 12:44:06.329688  296590 provision.go:177] copyRemoteCerts
	I1020 12:44:06.329755  296590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:44:06.329813  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:06.351301  296590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:44:06.455971  296590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1020 12:44:06.477331  296590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:44:06.496977  296590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:44:06.517946  296590 provision.go:87] duration metric: took 417.714045ms to configureAuth
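Note: the server certificate generated above is signed for the SANs listed in the log (127.0.0.1, 192.168.94.2, kubernetes-upgrade-196539, localhost, minikube). A quick way to inspect them after the copyRemoteCerts step, assuming openssl is available in the container:

	openssl x509 -in /etc/docker/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'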
	I1020 12:44:06.517975  296590 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:44:06.518142  296590 config.go:182] Loaded profile config "kubernetes-upgrade-196539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:44:06.518276  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:06.538300  296590 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:06.538604  296590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1020 12:44:06.538622  296590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:44:07.125257  296590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:44:07.125291  296590 machine.go:96] duration metric: took 1.541836316s to provisionDockerMachine
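Note: the tee above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and then restarts crio. On minikube's kicbase image the crio unit is expected to pick this file up via an EnvironmentFile= directive (an assumption about the image, not something shown in this log); a sketch to verify:

	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i environmentfile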
	I1020 12:44:07.125306  296590 start.go:293] postStartSetup for "kubernetes-upgrade-196539" (driver="docker")
	I1020 12:44:07.125320  296590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:44:07.125407  296590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:44:07.125459  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:07.156037  296590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:44:07.270284  296590 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:44:07.274561  296590 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:44:07.274587  296590 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:44:07.274597  296590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:44:07.274652  296590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:44:07.274750  296590 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:44:07.274938  296590 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:44:07.284152  296590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:44:07.305077  296590 start.go:296] duration metric: took 179.754124ms for postStartSetup
	I1020 12:44:07.305165  296590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:44:07.305214  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:07.328685  296590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:44:07.437134  296590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:44:07.450069  296590 fix.go:56] duration metric: took 1.888081667s for fixHost
	I1020 12:44:07.450098  296590 start.go:83] releasing machines lock for "kubernetes-upgrade-196539", held for 1.888138292s
	I1020 12:44:07.450181  296590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-196539
	I1020 12:44:07.475193  296590 ssh_runner.go:195] Run: cat /version.json
	I1020 12:44:07.475254  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:07.475274  296590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:44:07.475358  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:07.501689  296590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:44:07.502601  296590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:44:07.692643  296590 ssh_runner.go:195] Run: systemctl --version
	I1020 12:44:07.700839  296590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:44:07.751879  296590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:44:07.757818  296590 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:44:07.757921  296590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:44:07.772191  296590 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 12:44:07.772218  296590 start.go:495] detecting cgroup driver to use...
	I1020 12:44:07.772335  296590 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:44:07.772426  296590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:44:07.791034  296590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:44:07.807071  296590 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:44:07.807190  296590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:44:07.827015  296590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:44:07.844569  296590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:44:07.982039  296590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:44:08.118144  296590 docker.go:234] disabling docker service ...
	I1020 12:44:08.118212  296590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:44:08.138895  296590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:44:08.158142  296590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:44:08.308529  296590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:44:08.457448  296590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
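Note: the stop/disable/mask sequence above ensures dockerd cannot be socket-activated back to life while cri-o owns the container runtime. A condensed equivalent, as a sketch:

	sudo systemctl disable --now docker.socket docker.service
	sudo systemctl mask docker.service
	systemctl is-active --quiet docker || echo 'docker is down'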
	I1020 12:44:08.477945  296590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:44:08.498084  296590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:44:08.498182  296590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.515114  296590 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:44:08.515186  296590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.534801  296590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.548353  296590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.561190  296590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:44:08.571009  296590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.583439  296590 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.594865  296590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.606506  296590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:44:08.615110  296590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:44:08.623381  296590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:44:08.767604  296590 ssh_runner.go:195] Run: sudo systemctl restart crio
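Note: taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a reconstruction from the commands, not a capture from the node):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]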
	I1020 12:44:12.431877  290109 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 12:44:12.431934  290109 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 12:44:12.432027  290109 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 12:44:12.432076  290109 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1020 12:44:12.432113  290109 kubeadm.go:318] OS: Linux
	I1020 12:44:12.432199  290109 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 12:44:12.432295  290109 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 12:44:12.432339  290109 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 12:44:12.432432  290109 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 12:44:12.432503  290109 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 12:44:12.432561  290109 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 12:44:12.432603  290109 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 12:44:12.432673  290109 kubeadm.go:318] CGROUPS_IO: enabled
	I1020 12:44:12.432812  290109 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 12:44:12.432949  290109 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 12:44:12.433084  290109 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 12:44:12.433179  290109 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 12:44:12.434827  290109 out.go:252]   - Generating certificates and keys ...
	I1020 12:44:12.434923  290109 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 12:44:12.435020  290109 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 12:44:12.435103  290109 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 12:44:12.435189  290109 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 12:44:12.435288  290109 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 12:44:12.435362  290109 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 12:44:12.435444  290109 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 12:44:12.435622  290109 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kindnet-312375 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1020 12:44:12.435694  290109 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 12:44:12.435880  290109 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kindnet-312375 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1020 12:44:12.435942  290109 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 12:44:12.436011  290109 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 12:44:12.436091  290109 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 12:44:12.436169  290109 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 12:44:12.436249  290109 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 12:44:12.436356  290109 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 12:44:12.436436  290109 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 12:44:12.436543  290109 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 12:44:12.436627  290109 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 12:44:12.436746  290109 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 12:44:12.436876  290109 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1020 12:44:12.439264  290109 out.go:252]   - Booting up control plane ...
	I1020 12:44:12.439356  290109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 12:44:12.439424  290109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 12:44:12.439482  290109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 12:44:12.439607  290109 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 12:44:12.439727  290109 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 12:44:12.439860  290109 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 12:44:12.439964  290109 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 12:44:12.440017  290109 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 12:44:12.440134  290109 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 12:44:12.440230  290109 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 12:44:12.440286  290109 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.490951ms
	I1020 12:44:12.440374  290109 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 12:44:12.440448  290109 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1020 12:44:12.440526  290109 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 12:44:12.440598  290109 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 12:44:12.440691  290109 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.989159483s
	I1020 12:44:12.440761  290109 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.558674995s
	I1020 12:44:12.440832  290109 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.50220264s
	I1020 12:44:12.440923  290109 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 12:44:12.441035  290109 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 12:44:12.441094  290109 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 12:44:12.441261  290109 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-312375 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 12:44:12.441317  290109 kubeadm.go:318] [bootstrap-token] Using token: zldgui.lrpvkfzs6byfp132
	I1020 12:44:12.442712  290109 out.go:252]   - Configuring RBAC rules ...
	I1020 12:44:12.442845  290109 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 12:44:12.442949  290109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 12:44:12.443140  290109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 12:44:12.443304  290109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 12:44:12.443439  290109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 12:44:12.443553  290109 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 12:44:12.443736  290109 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 12:44:12.443808  290109 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 12:44:12.443860  290109 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 12:44:12.443867  290109 kubeadm.go:318] 
	I1020 12:44:12.443913  290109 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 12:44:12.443919  290109 kubeadm.go:318] 
	I1020 12:44:12.443986  290109 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 12:44:12.443998  290109 kubeadm.go:318] 
	I1020 12:44:12.444047  290109 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 12:44:12.444135  290109 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 12:44:12.444194  290109 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 12:44:12.444203  290109 kubeadm.go:318] 
	I1020 12:44:12.444285  290109 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 12:44:12.444296  290109 kubeadm.go:318] 
	I1020 12:44:12.444336  290109 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 12:44:12.444342  290109 kubeadm.go:318] 
	I1020 12:44:12.444405  290109 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 12:44:12.444512  290109 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 12:44:12.444606  290109 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 12:44:12.444615  290109 kubeadm.go:318] 
	I1020 12:44:12.444727  290109 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 12:44:12.444854  290109 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 12:44:12.444862  290109 kubeadm.go:318] 
	I1020 12:44:12.444982  290109 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token zldgui.lrpvkfzs6byfp132 \
	I1020 12:44:12.445139  290109 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e \
	I1020 12:44:12.445162  290109 kubeadm.go:318] 	--control-plane 
	I1020 12:44:12.445167  290109 kubeadm.go:318] 
	I1020 12:44:12.445286  290109 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 12:44:12.445297  290109 kubeadm.go:318] 
	I1020 12:44:12.445420  290109 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token zldgui.lrpvkfzs6byfp132 \
	I1020 12:44:12.445588  290109 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e 
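Note: if the --discovery-token-ca-cert-hash value above ever needs to be recomputed outside this log, the standard kubeadm recipe works against the cluster CA (assuming an RSA CA key, kubeadm's default, and the certificateDir logged above):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'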
	I1020 12:44:12.445599  290109 cni.go:84] Creating CNI manager for "kindnet"
	I1020 12:44:12.447104  290109 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1020 12:44:10.474446  282174 pod_ready.go:104] pod "coredns-66bc5c9577-vpzk5" is not "Ready", error: <nil>
	I1020 12:44:11.474426  282174 pod_ready.go:94] pod "coredns-66bc5c9577-vpzk5" is "Ready"
	I1020 12:44:11.474455  282174 pod_ready.go:86] duration metric: took 37.006911205s for pod "coredns-66bc5c9577-vpzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.477456  282174 pod_ready.go:83] waiting for pod "etcd-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.482241  282174 pod_ready.go:94] pod "etcd-embed-certs-907116" is "Ready"
	I1020 12:44:11.482264  282174 pod_ready.go:86] duration metric: took 4.783797ms for pod "etcd-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.484429  282174 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.488616  282174 pod_ready.go:94] pod "kube-apiserver-embed-certs-907116" is "Ready"
	I1020 12:44:11.488637  282174 pod_ready.go:86] duration metric: took 4.185977ms for pod "kube-apiserver-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.490659  282174 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.672972  282174 pod_ready.go:94] pod "kube-controller-manager-embed-certs-907116" is "Ready"
	I1020 12:44:11.673006  282174 pod_ready.go:86] duration metric: took 182.327383ms for pod "kube-controller-manager-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.871013  282174 pod_ready.go:83] waiting for pod "kube-proxy-s2xbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:12.270962  282174 pod_ready.go:94] pod "kube-proxy-s2xbv" is "Ready"
	I1020 12:44:12.270992  282174 pod_ready.go:86] duration metric: took 399.955657ms for pod "kube-proxy-s2xbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:12.471348  282174 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:12.870964  282174 pod_ready.go:94] pod "kube-scheduler-embed-certs-907116" is "Ready"
	I1020 12:44:12.870988  282174 pod_ready.go:86] duration metric: took 399.618167ms for pod "kube-scheduler-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:12.870999  282174 pod_ready.go:40] duration metric: took 38.406812384s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:44:12.924876  282174 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:44:12.926672  282174 out.go:179] * Done! kubectl is now configured to use "embed-certs-907116" cluster and "default" namespace by default
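Note: the pod_ready polling above (37s for coredns, then per-component checks) can be reproduced interactively with kubectl wait; a rough equivalent, not minikube's own mechanism:

	kubectl --context embed-certs-907116 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=120s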
	I1020 12:44:08.277351  294742 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:44:08.277377  294742 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:44:08.277386  294742 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 12:44:08.277494  294742 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-312375 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-312375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
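Note: in the kubelet unit drop-in above, the empty ExecStart= line is deliberate; systemd requires clearing the base unit's ExecStart before a drop-in may set a new one. Once the files are copied (see the scp lines below), the merged unit can be inspected with:

	systemctl cat kubelet
	# shows /lib/systemd/system/kubelet.service plus
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf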
	I1020 12:44:08.277576  294742 ssh_runner.go:195] Run: crio config
	I1020 12:44:08.355011  294742 cni.go:84] Creating CNI manager for "calico"
	I1020 12:44:08.355055  294742 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:44:08.355085  294742 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-312375 NodeName:calico-312375 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:44:08.355265  294742 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-312375"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
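Note: recent kubeadm releases can sanity-check a multi-document config like the one above before init runs; a sketch, using the binaries directory this run already relies on:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml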
	
	I1020 12:44:08.355336  294742 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:44:08.367940  294742 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:44:08.368021  294742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:44:08.386983  294742 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1020 12:44:08.409010  294742 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:44:08.431202  294742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1020 12:44:08.455323  294742 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:44:08.460990  294742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:44:08.476996  294742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:44:08.597660  294742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:44:08.632247  294742 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375 for IP: 192.168.85.2
	I1020 12:44:08.632283  294742 certs.go:195] generating shared ca certs ...
	I1020 12:44:08.632303  294742 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:08.632459  294742 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:44:08.632522  294742 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:44:08.632532  294742 certs.go:257] generating profile certs ...
	I1020 12:44:08.632601  294742 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/client.key
	I1020 12:44:08.632619  294742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/client.crt with IP's: []
	I1020 12:44:09.202850  294742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/client.crt ...
	I1020 12:44:09.202877  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/client.crt: {Name:mkbdec429d4cbda4fb9bc977f19afd051ce3355d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:09.203074  294742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/client.key ...
	I1020 12:44:09.203085  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/client.key: {Name:mk7ff7f7b99fe7d84ed5cb3c6639b23b253ed35a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:09.203168  294742 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.key.98b9624f
	I1020 12:44:09.203183  294742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.crt.98b9624f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1020 12:44:09.266475  294742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.crt.98b9624f ...
	I1020 12:44:09.266510  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.crt.98b9624f: {Name:mk0fabf3fcd389c49d8e41b45fc5dcfbc97753e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:09.266711  294742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.key.98b9624f ...
	I1020 12:44:09.266738  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.key.98b9624f: {Name:mk87d98dfeeb2648e041ef4287a4b054cbeaeb28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:09.266886  294742 certs.go:382] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.crt.98b9624f -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.crt
	I1020 12:44:09.267005  294742 certs.go:386] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.key.98b9624f -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.key
	I1020 12:44:09.267115  294742 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.key
	I1020 12:44:09.267136  294742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.crt with IP's: []
	I1020 12:44:09.828233  294742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.crt ...
	I1020 12:44:09.828260  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.crt: {Name:mkbdd621c1182dfd2366cefca902df57d087dc5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:09.828470  294742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.key ...
	I1020 12:44:09.828486  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.key: {Name:mkec8558969c7bf7b65ab79964cbf5f89003acf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:09.828714  294742 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:44:09.828765  294742 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:44:09.828794  294742 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:44:09.828825  294742 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:44:09.828856  294742 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:44:09.828888  294742 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:44:09.828943  294742 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:44:09.829578  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:44:09.848292  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:44:09.866601  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:44:09.884359  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:44:09.903577  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1020 12:44:09.921254  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1020 12:44:09.939055  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:44:09.957033  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:44:09.975799  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:44:09.995242  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:44:10.013648  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:44:10.035020  294742 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:44:10.050006  294742 ssh_runner.go:195] Run: openssl version
	I1020 12:44:10.057233  294742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:44:10.068165  294742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:44:10.073118  294742 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:44:10.073176  294742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:44:10.123459  294742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:44:10.135609  294742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:44:10.147033  294742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:44:10.151755  294742 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:44:10.151824  294742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:44:10.198624  294742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:44:10.209926  294742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:44:10.220167  294742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:44:10.224551  294742 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:44:10.224621  294742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:44:10.269750  294742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
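Note: the b5213941.0-style names above come from OpenSSL's subject-hash lookup scheme: openssl x509 -hash prints the hash of the certificate subject, and a <hash>.0 symlink in /etc/ssl/certs is how the library locates the CA. The same wiring by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"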
	I1020 12:44:10.280228  294742 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:44:10.284865  294742 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 12:44:10.284926  294742 kubeadm.go:400] StartCluster: {Name:calico-312375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-312375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:44:10.285003  294742 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:44:10.285062  294742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:44:10.316297  294742 cri.go:89] found id: ""
	I1020 12:44:10.316373  294742 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:44:10.326187  294742 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:44:10.335518  294742 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 12:44:10.335580  294742 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:44:10.344858  294742 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 12:44:10.344898  294742 kubeadm.go:157] found existing configuration files:
	
	I1020 12:44:10.344954  294742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 12:44:10.354104  294742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 12:44:10.354165  294742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 12:44:10.363401  294742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 12:44:10.372401  294742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 12:44:10.372470  294742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:44:10.381702  294742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 12:44:10.391079  294742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 12:44:10.391140  294742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:44:10.400839  294742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 12:44:10.410272  294742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 12:44:10.410337  294742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 12:44:10.420211  294742 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 12:44:10.493661  294742 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1020 12:44:10.565558  294742 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1020 12:44:12.448195  290109 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 12:44:12.452640  290109 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 12:44:12.452672  290109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 12:44:12.466669  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 12:44:12.692968  290109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 12:44:12.693100  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:12.693195  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-312375 minikube.k8s.io/updated_at=2025_10_20T12_44_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=kindnet-312375 minikube.k8s.io/primary=true
	I1020 12:44:12.706185  290109 ops.go:34] apiserver oom_adj: -16
	I1020 12:44:12.806651  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:13.307466  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:13.806959  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:14.307287  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:14.806956  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:15.307599  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:15.807545  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:16.306793  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:16.806878  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:17.307479  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:17.406027  290109 kubeadm.go:1113] duration metric: took 4.712979593s to wait for elevateKubeSystemPrivileges
	I1020 12:44:17.406067  290109 kubeadm.go:402] duration metric: took 17.573398912s to StartCluster
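Note: the repeated 'kubectl get sa default' runs above are a poll for the default service account, which the controller manager creates asynchronously after the namespace exists. A standalone equivalent of that loop:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done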
	I1020 12:44:17.406098  290109 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:17.406177  290109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:44:17.408438  290109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:17.408697  290109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 12:44:17.408718  290109 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:44:17.409152  290109 config.go:182] Loaded profile config "kindnet-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:44:17.409124  290109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:44:17.409214  290109 addons.go:69] Setting storage-provisioner=true in profile "kindnet-312375"
	I1020 12:44:17.409232  290109 addons.go:238] Setting addon storage-provisioner=true in "kindnet-312375"
	I1020 12:44:17.409261  290109 host.go:66] Checking if "kindnet-312375" exists ...
	I1020 12:44:17.409276  290109 addons.go:69] Setting default-storageclass=true in profile "kindnet-312375"
	I1020 12:44:17.409294  290109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-312375"
	I1020 12:44:17.409624  290109 cli_runner.go:164] Run: docker container inspect kindnet-312375 --format={{.State.Status}}
	I1020 12:44:17.409808  290109 cli_runner.go:164] Run: docker container inspect kindnet-312375 --format={{.State.Status}}
	I1020 12:44:17.412199  290109 out.go:179] * Verifying Kubernetes components...
	I1020 12:44:17.414746  290109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:44:17.436653  290109 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:44:17.437180  290109 addons.go:238] Setting addon default-storageclass=true in "kindnet-312375"
	I1020 12:44:17.437225  290109 host.go:66] Checking if "kindnet-312375" exists ...
	I1020 12:44:17.437692  290109 cli_runner.go:164] Run: docker container inspect kindnet-312375 --format={{.State.Status}}
	I1020 12:44:17.438151  290109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:44:17.438173  290109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:44:17.438229  290109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-312375
	I1020 12:44:17.465568  290109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:44:17.465592  290109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:44:17.465651  290109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-312375
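
The two "docker container inspect -f" commands above are how minikube discovers which host port Docker mapped to the node container's SSH port (22/tcp); the result feeds the sshutil clients dialing 127.0.0.1:33108 just below. A small Go sketch of the same lookup, assuming docker on PATH (the function name is illustrative):

    // Sketch of the port lookup above (assumes docker on PATH): the Go template
    // indexes the container's published port bindings and returns the host-side
    // port that minikube dials for SSH.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func sshHostPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("kindnet-312375")
        fmt.Println(port, err) // e.g. 33108, per the sshutil lines in this log
    }
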
	I1020 12:44:17.465986  290109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kindnet-312375/id_rsa Username:docker}
	I1020 12:44:17.487518  290109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kindnet-312375/id_rsa Username:docker}
	I1020 12:44:17.504894  290109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 12:44:17.569976  290109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:44:17.587395  290109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:44:17.604809  290109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:44:17.684900  290109 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
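
The sed pipeline at 12:44:17.504894 is easier to read as its output: it splices a hosts block in front of the forward plugin and a log directive in front of errors, then replaces the coredns ConfigMap. Assuming the stock kubeadm Corefile, the patched data comes out roughly as follows (inner plugin bodies elided):

    .:53 {
        log
        errors
        health { ... }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa { ... }
        prometheus :9153
        hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf { ... }
        cache 30
        loop
        reload
        loadbalance
    }

This is what makes host.minikube.internal resolvable from pods, as the "host record injected" line above confirms.
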
	I1020 12:44:17.686623  290109 node_ready.go:35] waiting up to 15m0s for node "kindnet-312375" to be "Ready" ...
	I1020 12:44:17.906437  290109 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1020 12:44:17.907707  290109 addons.go:514] duration metric: took 498.582147ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1020 12:44:18.188900  290109 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-312375" context rescaled to 1 replicas
	I1020 12:44:20.251338  294742 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 12:44:20.251408  294742 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 12:44:20.251516  294742 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 12:44:20.251595  294742 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1020 12:44:20.251647  294742 kubeadm.go:318] OS: Linux
	I1020 12:44:20.251718  294742 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 12:44:20.251814  294742 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 12:44:20.251885  294742 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 12:44:20.251980  294742 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 12:44:20.252121  294742 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 12:44:20.252176  294742 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 12:44:20.252218  294742 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 12:44:20.252261  294742 kubeadm.go:318] CGROUPS_IO: enabled
	I1020 12:44:20.252326  294742 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 12:44:20.252432  294742 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 12:44:20.252562  294742 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 12:44:20.252654  294742 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 12:44:20.254331  294742 out.go:252]   - Generating certificates and keys ...
	I1020 12:44:20.254421  294742 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 12:44:20.254523  294742 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 12:44:20.254619  294742 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 12:44:20.254709  294742 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 12:44:20.254834  294742 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 12:44:20.254912  294742 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 12:44:20.255011  294742 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 12:44:20.255126  294742 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [calico-312375 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1020 12:44:20.255173  294742 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 12:44:20.255289  294742 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [calico-312375 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1020 12:44:20.255421  294742 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 12:44:20.255484  294742 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 12:44:20.255542  294742 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 12:44:20.255603  294742 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 12:44:20.255650  294742 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 12:44:20.255703  294742 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 12:44:20.255752  294742 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 12:44:20.255849  294742 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 12:44:20.255909  294742 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 12:44:20.255979  294742 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 12:44:20.256036  294742 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1020 12:44:20.258256  294742 out.go:252]   - Booting up control plane ...
	I1020 12:44:20.258376  294742 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 12:44:20.258491  294742 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 12:44:20.258594  294742 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 12:44:20.258738  294742 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 12:44:20.258881  294742 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 12:44:20.259008  294742 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 12:44:20.259099  294742 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 12:44:20.259136  294742 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 12:44:20.259260  294742 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 12:44:20.259364  294742 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 12:44:20.259415  294742 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001801365s
	I1020 12:44:20.259511  294742 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 12:44:20.259624  294742 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1020 12:44:20.259763  294742 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 12:44:20.259919  294742 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 12:44:20.259993  294742 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.445791212s
	I1020 12:44:20.260051  294742 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.923123487s
	I1020 12:44:20.260120  294742 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.502011455s
	I1020 12:44:20.260231  294742 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 12:44:20.260368  294742 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 12:44:20.260465  294742 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 12:44:20.260759  294742 kubeadm.go:318] [mark-control-plane] Marking the node calico-312375 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 12:44:20.260868  294742 kubeadm.go:318] [bootstrap-token] Using token: tjsqif.gnw9gi313y3h01f3
	I1020 12:44:20.262298  294742 out.go:252]   - Configuring RBAC rules ...
	I1020 12:44:20.262400  294742 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 12:44:20.262478  294742 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 12:44:20.262600  294742 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 12:44:20.262747  294742 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 12:44:20.262886  294742 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 12:44:20.262960  294742 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 12:44:20.263105  294742 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 12:44:20.263169  294742 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 12:44:20.263218  294742 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 12:44:20.263228  294742 kubeadm.go:318] 
	I1020 12:44:20.263304  294742 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 12:44:20.263314  294742 kubeadm.go:318] 
	I1020 12:44:20.263427  294742 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 12:44:20.263449  294742 kubeadm.go:318] 
	I1020 12:44:20.263489  294742 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 12:44:20.263584  294742 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 12:44:20.263659  294742 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 12:44:20.263668  294742 kubeadm.go:318] 
	I1020 12:44:20.263745  294742 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 12:44:20.263755  294742 kubeadm.go:318] 
	I1020 12:44:20.263853  294742 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 12:44:20.263864  294742 kubeadm.go:318] 
	I1020 12:44:20.263938  294742 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 12:44:20.264058  294742 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 12:44:20.264131  294742 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 12:44:20.264137  294742 kubeadm.go:318] 
	I1020 12:44:20.264211  294742 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 12:44:20.264281  294742 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 12:44:20.264286  294742 kubeadm.go:318] 
	I1020 12:44:20.264381  294742 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token tjsqif.gnw9gi313y3h01f3 \
	I1020 12:44:20.264485  294742 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e \
	I1020 12:44:20.264506  294742 kubeadm.go:318] 	--control-plane 
	I1020 12:44:20.264525  294742 kubeadm.go:318] 
	I1020 12:44:20.264603  294742 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 12:44:20.264609  294742 kubeadm.go:318] 
	I1020 12:44:20.264686  294742 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token tjsqif.gnw9gi313y3h01f3 \
	I1020 12:44:20.264811  294742 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e 
	I1020 12:44:20.264823  294742 cni.go:84] Creating CNI manager for "calico"
	I1020 12:44:20.266493  294742 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1020 12:44:20.269071  294742 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 12:44:20.269092  294742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1020 12:44:20.284385  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
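
The "scp memory --> /var/tmp/minikube/cni.yaml" line means the Calico manifest is streamed from an in-memory asset over the SSH connection rather than copied from a file on disk, after which the bundled kubectl applies it. A rough Go sketch of streaming bytes to a remote path over SSH, assuming golang.org/x/crypto/ssh and using "sudo tee" as a stand-in for the scp protocol the real ssh_runner speaks:

    // Rough sketch (assumptions: golang.org/x/crypto/ssh; "sudo tee" stands in
    // for the scp protocol minikube actually uses) of streaming an in-memory
    // manifest to a path inside the node container.
    package main

    import (
        "bytes"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func writeRemote(client *ssh.Client, path string, data []byte) error {
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        session.Stdin = bytes.NewReader(data) // stream the asset; no local temp file
        return session.Run("sudo tee " + path + " >/dev/null")
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ /* ssh.PublicKeys(signer) from the id_rsa shown above */ },
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only, as minikube does for its own nodes
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33113", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        if err := writeRemote(client, "/var/tmp/minikube/cni.yaml", []byte("# manifest bytes")); err != nil {
            log.Fatal(err)
        }
    }
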
	I1020 12:44:21.102888  294742 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 12:44:21.102967  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:21.103016  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-312375 minikube.k8s.io/updated_at=2025_10_20T12_44_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=calico-312375 minikube.k8s.io/primary=true
	I1020 12:44:21.115039  294742 ops.go:34] apiserver oom_adj: -16
	I1020 12:44:21.176324  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:21.676988  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:22.177282  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:22.676977  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:23.176821  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1020 12:44:19.690228  290109 node_ready.go:57] node "kindnet-312375" has "Ready":"False" status (will retry)
	W1020 12:44:21.690294  290109 node_ready.go:57] node "kindnet-312375" has "Ready":"False" status (will retry)
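
The node_ready.go warnings above are minikube polling the kindnet-312375 Node object until its Ready condition flips to True, within the 15m budget noted at 12:44:17.686623. A minimal client-go sketch of the underlying check, assuming the jenkins kubeconfig path shown earlier in this log:

    // Minimal client-go sketch of the readiness test behind node_ready.go:
    // fetch the Node and report whether its NodeReady condition is True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil // no Ready condition reported yet
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21773-11075/kubeconfig")
        if err != nil {
            panic(err)
        }
        ready, err := nodeIsReady(kubernetes.NewForConfigOrDie(cfg), "kindnet-312375")
        fmt.Println(ready, err)
    }
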
	I1020 12:44:23.677177  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:24.176970  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:24.676553  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:24.744631  294742 kubeadm.go:1113] duration metric: took 3.641723429s to wait for elevateKubeSystemPrivileges
	I1020 12:44:24.744673  294742 kubeadm.go:402] duration metric: took 14.459752641s to StartCluster
	I1020 12:44:24.744691  294742 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:24.744752  294742 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:44:24.746545  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:24.746805  294742 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:44:24.746869  294742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 12:44:24.746869  294742 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:44:24.746967  294742 addons.go:69] Setting storage-provisioner=true in profile "calico-312375"
	I1020 12:44:24.746988  294742 addons.go:238] Setting addon storage-provisioner=true in "calico-312375"
	I1020 12:44:24.747025  294742 host.go:66] Checking if "calico-312375" exists ...
	I1020 12:44:24.747029  294742 addons.go:69] Setting default-storageclass=true in profile "calico-312375"
	I1020 12:44:24.747054  294742 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-312375"
	I1020 12:44:24.747052  294742 config.go:182] Loaded profile config "calico-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:44:24.747420  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Status}}
	I1020 12:44:24.747600  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Status}}
	I1020 12:44:24.748658  294742 out.go:179] * Verifying Kubernetes components...
	I1020 12:44:24.750265  294742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:44:24.770805  294742 addons.go:238] Setting addon default-storageclass=true in "calico-312375"
	I1020 12:44:24.770848  294742 host.go:66] Checking if "calico-312375" exists ...
	I1020 12:44:24.771020  294742 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:44:24.771246  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Status}}
	I1020 12:44:24.772464  294742 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:44:24.772484  294742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:44:24.772534  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:24.799721  294742 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:44:24.799747  294742 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:44:24.799831  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:24.802375  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:24.823968  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:24.832124  294742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 12:44:24.886570  294742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:44:24.918328  294742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:44:24.937681  294742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:44:25.007568  294742 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1020 12:44:25.009316  294742 node_ready.go:35] waiting up to 15m0s for node "calico-312375" to be "Ready" ...
	I1020 12:44:25.248074  294742 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> CRI-O <==
	Oct 20 12:43:58 embed-certs-907116 crio[570]: time="2025-10-20T12:43:58.197851192Z" level=info msg="Started container" PID=1764 containerID=ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps/dashboard-metrics-scraper id=e31c43bb-063b-4610-9be7-b0bc3fd8f733 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90011cc03ba47a375594dbf61912e2481623b1f883674afc397ae5b761f0bbdd
	Oct 20 12:43:58 embed-certs-907116 crio[570]: time="2025-10-20T12:43:58.266236995Z" level=info msg="Removing container: 075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e" id=58298785-8705-4433-a00a-87fab5bd4053 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:43:58 embed-certs-907116 crio[570]: time="2025-10-20T12:43:58.283248039Z" level=info msg="Removed container 075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps/dashboard-metrics-scraper" id=58298785-8705-4433-a00a-87fab5bd4053 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.285273955Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cca4ef92-a93c-495d-9a12-b9944b59d359 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.286837765Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ab438d4a-0e8a-4f4c-ba5f-95200ac8bd6b name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.288669212Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=851ba572-c974-496c-a99e-0c9172565a3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.288819131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.293851634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.294066413Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/32df45fc59921da7bd95c993752f0ac3cb7a35e101ef6848510b9dd88bbe3e29/merged/etc/passwd: no such file or directory"
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.294105577Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/32df45fc59921da7bd95c993752f0ac3cb7a35e101ef6848510b9dd88bbe3e29/merged/etc/group: no such file or directory"
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.294850571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.324758978Z" level=info msg="Created container 8d38623393b88729921c4e30b52c75f00b003c405f1cfe26c42bfceddabd4e95: kube-system/storage-provisioner/storage-provisioner" id=851ba572-c974-496c-a99e-0c9172565a3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.32556516Z" level=info msg="Starting container: 8d38623393b88729921c4e30b52c75f00b003c405f1cfe26c42bfceddabd4e95" id=ede38d2f-f86e-4bdb-8f53-8bc91bb276f9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.32792419Z" level=info msg="Started container" PID=1778 containerID=8d38623393b88729921c4e30b52c75f00b003c405f1cfe26c42bfceddabd4e95 description=kube-system/storage-provisioner/storage-provisioner id=ede38d2f-f86e-4bdb-8f53-8bc91bb276f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=83fad4c5d6e9b8987a656f19af4557cc7b5171129bf9b4088834ea96f49e4483
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.146714053Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=01186580-ad71-4914-907d-953a8c756316 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.14770439Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5bbdc968-3c29-49cf-b405-85fd4d684faf name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.148793943Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps/dashboard-metrics-scraper" id=544f2f18-addf-47ea-bb81-a657e998c7e1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.148949924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.154468673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.154989711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.178971346Z" level=info msg="Created container d552900c00f6673b2cbe4e7bd3a92fe95e581c59ad9e049938e311cfdf8dd277: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps/dashboard-metrics-scraper" id=544f2f18-addf-47ea-bb81-a657e998c7e1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.17973574Z" level=info msg="Starting container: d552900c00f6673b2cbe4e7bd3a92fe95e581c59ad9e049938e311cfdf8dd277" id=5c582472-891d-4a7d-b1bb-712240090a34 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.182021807Z" level=info msg="Started container" PID=1814 containerID=d552900c00f6673b2cbe4e7bd3a92fe95e581c59ad9e049938e311cfdf8dd277 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps/dashboard-metrics-scraper id=5c582472-891d-4a7d-b1bb-712240090a34 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90011cc03ba47a375594dbf61912e2481623b1f883674afc397ae5b761f0bbdd
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.345210969Z" level=info msg="Removing container: ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608" id=3eac7815-a8ee-4947-93fb-9fe7bdc4bde0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.354872936Z" level=info msg="Removed container ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps/dashboard-metrics-scraper" id=3eac7815-a8ee-4947-93fb-9fe7bdc4bde0 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d552900c00f66       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   90011cc03ba47       dashboard-metrics-scraper-6ffb444bf9-qsxps   kubernetes-dashboard
	8d38623393b88       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   83fad4c5d6e9b       storage-provisioner                          kube-system
	d1e0d8719fc2a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   2c16c2ec4274e       kubernetes-dashboard-855c9754f9-hm4nh        kubernetes-dashboard
	15ef78b953e81       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   69a8e7af48f96       coredns-66bc5c9577-vpzk5                     kube-system
	c783e004bdaed       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   3cdf8b8f17e38       busybox                                      default
	43d66af915825       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   db646b1767e51       kindnet-24g82                                kube-system
	9f6137e79a6af       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   8d8ed7b2b862b       kube-proxy-s2xbv                             kube-system
	e624948cc12c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   83fad4c5d6e9b       storage-provisioner                          kube-system
	71b8d519c8fcf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   47b0b7aeb88d6       etcd-embed-certs-907116                      kube-system
	b16a54394efdc       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   21f51af56dc9e       kube-controller-manager-embed-certs-907116   kube-system
	22cf3642d99bb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   ae7b39f737646       kube-apiserver-embed-certs-907116            kube-system
	c4cc4d9df25ab       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   83effa6393e5a       kube-scheduler-embed-certs-907116            kube-system
	
	
	==> coredns [15ef78b953e819a004b34f819cf429a261c9139ff8a41f2f50eede4db5a65bde] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53685 - 3538 "HINFO IN 7592767640842329698.2329089204509032448. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.41492157s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-907116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-907116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=embed-certs-907116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_42_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:42:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-907116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:44:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:44:03 +0000   Mon, 20 Oct 2025 12:42:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:44:03 +0000   Mon, 20 Oct 2025 12:42:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:44:03 +0000   Mon, 20 Oct 2025 12:42:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:44:03 +0000   Mon, 20 Oct 2025 12:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-907116
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                6a5dfc3b-6ef1-4198-ad94-963e2bd73b87
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-vpzk5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-907116                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-24g82                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-907116             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-907116    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-s2xbv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-907116             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qsxps    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hm4nh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node embed-certs-907116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node embed-certs-907116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node embed-certs-907116 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node embed-certs-907116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node embed-certs-907116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node embed-certs-907116 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node embed-certs-907116 event: Registered Node embed-certs-907116 in Controller
	  Normal  NodeReady                96s                  kubelet          Node embed-certs-907116 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node embed-certs-907116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node embed-certs-907116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node embed-certs-907116 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                  node-controller  Node embed-certs-907116 event: Registered Node embed-certs-907116 in Controller
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [71b8d519c8fcfa7da65604dd578e2dc4d11fc1ca223185cae0a2ce646c90d777] <==
	{"level":"warn","ts":"2025-10-20T12:43:32.236019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.246675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.259876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.266712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.272917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.280352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.287380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.294593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.301381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.308239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.316004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.323511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.331155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.338501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.345268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.352430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.359481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.366763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.373550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.381539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.394720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.403268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.410673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.466723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50928","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-20T12:44:03.937327Z","caller":"traceutil/trace.go:172","msg":"trace[169735283] transaction","detail":"{read_only:false; response_revision:652; number_of_response:1; }","duration":"211.310798ms","start":"2025-10-20T12:44:03.725996Z","end":"2025-10-20T12:44:03.937307Z","steps":["trace[169735283] 'process raft request'  (duration: 211.155082ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:44:28 up  1:26,  0 user,  load average: 4.20, 3.63, 2.40
	Linux embed-certs-907116 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [43d66af915825ae45fb963115486c3a36542c4a768ce4d5fed2ff9bc19ed78cc] <==
	I1020 12:43:33.692641       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:43:33.692910       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 12:43:33.693102       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:43:33.693122       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:43:33.693146       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:43:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:43:33.990164       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:43:33.990288       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:43:33.990310       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:43:33.990635       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:43:34.290707       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:43:34.290741       1 metrics.go:72] Registering metrics
	I1020 12:43:34.290857       1 controller.go:711] "Syncing nftables rules"
	I1020 12:43:43.896846       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 12:43:43.896911       1 main.go:301] handling current node
	I1020 12:43:53.901074       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 12:43:53.901151       1 main.go:301] handling current node
	I1020 12:44:03.896725       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 12:44:03.896832       1 main.go:301] handling current node
	I1020 12:44:13.899001       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 12:44:13.899042       1 main.go:301] handling current node
	I1020 12:44:23.896908       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 12:44:23.896952       1 main.go:301] handling current node
	
	
	==> kube-apiserver [22cf3642d99bbb980929d5d8e78116ccc79fbe6f90ed96694a1910e81f25dac6] <==
	I1020 12:43:32.945995       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 12:43:32.946218       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1020 12:43:32.949198       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 12:43:32.949240       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 12:43:32.949256       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:43:32.949261       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 12:43:32.949205       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 12:43:32.949413       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1020 12:43:32.949593       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1020 12:43:32.954878       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 12:43:32.970398       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:43:32.975240       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1020 12:43:32.979823       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 12:43:33.240288       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:43:33.270297       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:43:33.273465       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:43:33.297017       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:43:33.303962       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:43:33.344952       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.63.255"}
	I1020 12:43:33.355395       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.195.75"}
	I1020 12:43:33.849466       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:43:36.511725       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:43:36.711646       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:43:36.763003       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b16a54394efdcb6933a56df962f7f8423ae93b34d8452a6afc5f404b46da576e] <==
	I1020 12:43:36.308713       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 12:43:36.308724       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 12:43:36.308750       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:43:36.308709       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 12:43:36.309075       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 12:43:36.309098       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 12:43:36.309129       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:43:36.309144       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:43:36.309158       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:43:36.309186       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-907116"
	I1020 12:43:36.309240       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1020 12:43:36.310177       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 12:43:36.311862       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 12:43:36.313719       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:43:36.313719       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 12:43:36.314858       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:43:36.316686       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 12:43:36.318885       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 12:43:36.318907       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 12:43:36.319003       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 12:43:36.320073       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 12:43:36.322291       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 12:43:36.324559       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 12:43:36.326814       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 12:43:36.334237       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9f6137e79a6af824320fb4d2c61c014d11280ad3d72aaf8477198b8a808bfe57] <==
	I1020 12:43:33.542103       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:43:33.608747       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:43:33.709323       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:43:33.709373       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 12:43:33.709493       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:43:33.728366       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:43:33.728423       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:43:33.733993       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:43:33.734439       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:43:33.734455       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:43:33.735643       1 config.go:309] "Starting node config controller"
	I1020 12:43:33.735646       1 config.go:200] "Starting service config controller"
	I1020 12:43:33.735663       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:43:33.735666       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:43:33.735672       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:43:33.735763       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:43:33.735808       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:43:33.735844       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:43:33.735850       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:43:33.836631       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 12:43:33.836654       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:43:33.836633       1 shared_informer.go:356] "Caches are synced" controller="service config"
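The "Waiting for caches to sync" / "Caches are synced" pairs in this and the surrounding component logs are client-go's shared-informer startup handshake: start the informers, then block until every local cache reports synced before serving. A minimal sketch, assuming in-cluster credentials and the standard client-go packages:

    package main

    import (
        "log"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        // In-cluster credentials, as a pod like kube-proxy would use.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        // Build a shared informer factory and register the node informer.
        factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
        nodeInformer := factory.Core().V1().Nodes().Informer()

        stopCh := make(chan struct{})
        defer close(stopCh)
        factory.Start(stopCh)

        // Block until the initial list has been loaded into the local cache.
        log.Println("Waiting for caches to sync")
        if !cache.WaitForCacheSync(stopCh, nodeInformer.HasSynced) {
            log.Fatal("failed to sync informer cache")
        }
        log.Println("Caches are synced")
    }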
	
	
	==> kube-scheduler [c4cc4d9df25ab88c844bc98d6506700dac4d75294815034c92cfa41e1ddb2d01] <==
	I1020 12:43:31.875545       1 serving.go:386] Generated self-signed cert in-memory
	I1020 12:43:32.910860       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 12:43:32.910892       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:43:32.917721       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 12:43:32.918259       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 12:43:32.918858       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1020 12:43:32.918886       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1020 12:43:32.918931       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:43:32.918941       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:43:32.918959       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:43:32.918966       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:43:33.019219       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1020 12:43:33.019241       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:43:33.019218       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:43:40 embed-certs-907116 kubelet[731]: I1020 12:43:40.217971     731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm4nh" podStartSLOduration=1.37196685 podStartE2EDuration="4.217944756s" podCreationTimestamp="2025-10-20 12:43:36 +0000 UTC" firstStartedPulling="2025-10-20 12:43:37.265008206 +0000 UTC m=+7.209263289" lastFinishedPulling="2025-10-20 12:43:40.110986123 +0000 UTC m=+10.055241195" observedRunningTime="2025-10-20 12:43:40.217437593 +0000 UTC m=+10.161692682" watchObservedRunningTime="2025-10-20 12:43:40.217944756 +0000 UTC m=+10.162199844"
	Oct 20 12:43:41 embed-certs-907116 kubelet[731]: I1020 12:43:41.155417     731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 20 12:43:43 embed-certs-907116 kubelet[731]: I1020 12:43:43.216062     731 scope.go:117] "RemoveContainer" containerID="5aab2b11fd27e2090824fe95c2d8b6f4cb0e09435aee22f53cb71a38919a7bfe"
	Oct 20 12:43:44 embed-certs-907116 kubelet[731]: I1020 12:43:44.220597     731 scope.go:117] "RemoveContainer" containerID="5aab2b11fd27e2090824fe95c2d8b6f4cb0e09435aee22f53cb71a38919a7bfe"
	Oct 20 12:43:44 embed-certs-907116 kubelet[731]: I1020 12:43:44.220849     731 scope.go:117] "RemoveContainer" containerID="075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e"
	Oct 20 12:43:44 embed-certs-907116 kubelet[731]: E1020 12:43:44.221057     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qsxps_kubernetes-dashboard(2698391e-2efd-4836-bfab-522e6715b48f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps" podUID="2698391e-2efd-4836-bfab-522e6715b48f"
	Oct 20 12:43:45 embed-certs-907116 kubelet[731]: I1020 12:43:45.226458     731 scope.go:117] "RemoveContainer" containerID="075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e"
	Oct 20 12:43:45 embed-certs-907116 kubelet[731]: E1020 12:43:45.226640     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qsxps_kubernetes-dashboard(2698391e-2efd-4836-bfab-522e6715b48f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps" podUID="2698391e-2efd-4836-bfab-522e6715b48f"
	Oct 20 12:43:47 embed-certs-907116 kubelet[731]: I1020 12:43:47.235906     731 scope.go:117] "RemoveContainer" containerID="075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e"
	Oct 20 12:43:47 embed-certs-907116 kubelet[731]: E1020 12:43:47.236106     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qsxps_kubernetes-dashboard(2698391e-2efd-4836-bfab-522e6715b48f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps" podUID="2698391e-2efd-4836-bfab-522e6715b48f"
	Oct 20 12:43:58 embed-certs-907116 kubelet[731]: I1020 12:43:58.146468     731 scope.go:117] "RemoveContainer" containerID="075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e"
	Oct 20 12:43:58 embed-certs-907116 kubelet[731]: I1020 12:43:58.264197     731 scope.go:117] "RemoveContainer" containerID="075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e"
	Oct 20 12:43:58 embed-certs-907116 kubelet[731]: I1020 12:43:58.264478     731 scope.go:117] "RemoveContainer" containerID="ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608"
	Oct 20 12:43:58 embed-certs-907116 kubelet[731]: E1020 12:43:58.264722     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qsxps_kubernetes-dashboard(2698391e-2efd-4836-bfab-522e6715b48f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps" podUID="2698391e-2efd-4836-bfab-522e6715b48f"
	Oct 20 12:44:04 embed-certs-907116 kubelet[731]: I1020 12:44:04.284823     731 scope.go:117] "RemoveContainer" containerID="e624948cc12c19f3af9a7254915b203473031c57f36bc03588d8688e77b1c89d"
	Oct 20 12:44:07 embed-certs-907116 kubelet[731]: I1020 12:44:07.237040     731 scope.go:117] "RemoveContainer" containerID="ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608"
	Oct 20 12:44:07 embed-certs-907116 kubelet[731]: E1020 12:44:07.237376     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qsxps_kubernetes-dashboard(2698391e-2efd-4836-bfab-522e6715b48f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps" podUID="2698391e-2efd-4836-bfab-522e6715b48f"
	Oct 20 12:44:23 embed-certs-907116 kubelet[731]: I1020 12:44:23.146092     731 scope.go:117] "RemoveContainer" containerID="ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608"
	Oct 20 12:44:23 embed-certs-907116 kubelet[731]: I1020 12:44:23.341352     731 scope.go:117] "RemoveContainer" containerID="ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608"
	Oct 20 12:44:23 embed-certs-907116 kubelet[731]: I1020 12:44:23.341807     731 scope.go:117] "RemoveContainer" containerID="d552900c00f6673b2cbe4e7bd3a92fe95e581c59ad9e049938e311cfdf8dd277"
	Oct 20 12:44:23 embed-certs-907116 kubelet[731]: E1020 12:44:23.342051     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qsxps_kubernetes-dashboard(2698391e-2efd-4836-bfab-522e6715b48f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps" podUID="2698391e-2efd-4836-bfab-522e6715b48f"
	Oct 20 12:44:26 embed-certs-907116 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 12:44:26 embed-certs-907116 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 12:44:26 embed-certs-907116 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 20 12:44:26 embed-certs-907116 systemd[1]: kubelet.service: Consumed 1.826s CPU time.
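The kubelet back-off in the messages above doubles from 10s to 20s to 40s across restarts of the crashing dashboard-metrics-scraper container; kubelet's documented CrashLoopBackOff behavior starts at 10s, doubles per restart, and caps at five minutes. A small illustrative Go loop with those constants:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const max = 5 * time.Minute // kubelet's documented back-off cap
        backoff := 10 * time.Second // initial back-off after first crash
        for i := 0; i < 7; i++ {
            fmt.Printf("restart %d: back-off %s\n", i+1, backoff)
            backoff *= 2
            if backoff > max {
                backoff = max
            }
        }
    }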
	
	
	==> kubernetes-dashboard [d1e0d8719fc2a02f1a574fface75a559d0703a7f0c071f3f9e982fe3484fee6e] <==
	2025/10/20 12:43:40 Starting overwatch
	2025/10/20 12:43:40 Using namespace: kubernetes-dashboard
	2025/10/20 12:43:40 Using in-cluster config to connect to apiserver
	2025/10/20 12:43:40 Using secret token for csrf signing
	2025/10/20 12:43:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 12:43:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 12:43:40 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 12:43:40 Generating JWE encryption key
	2025/10/20 12:43:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 12:43:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 12:43:40 Initializing JWE encryption key from synchronized object
	2025/10/20 12:43:40 Creating in-cluster Sidecar client
	2025/10/20 12:43:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 12:43:40 Serving insecurely on HTTP port: 9090
	2025/10/20 12:44:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8d38623393b88729921c4e30b52c75f00b003c405f1cfe26c42bfceddabd4e95] <==
	I1020 12:44:04.343286       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 12:44:04.354099       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 12:44:04.354155       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 12:44:04.356927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:07.813230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:12.073122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:15.671916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:18.726446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:21.748898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:21.753631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:44:21.753831       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 12:44:21.753966       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e684f2b7-228c-4e12-97d9-985f6618132e", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-907116_1ab68dc1-6e36-4572-957a-3eff9ba52811 became leader
	I1020 12:44:21.754146       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-907116_1ab68dc1-6e36-4572-957a-3eff9ba52811!
	W1020 12:44:21.755931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:21.759003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:44:21.854507       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-907116_1ab68dc1-6e36-4572-957a-3eff9ba52811!
	W1020 12:44:23.761882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:23.765787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:25.769631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:25.774914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:27.778033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:27.782034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
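The repeated v1 Endpoints deprecation warnings above come from this provisioner's legacy Endpoints-based leader-election lock; the acquire/acquired pair is the standard client-go leader-election handshake. A minimal sketch of the modern Lease-based equivalent (the identity string is illustrative):

    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Lease object named after the lock seen in the log above.
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("became leader; starting provisioner") },
                OnStoppedLeading: func() { log.Println("lost leadership") },
            },
        })
    }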
	
	
	==> storage-provisioner [e624948cc12c19f3af9a7254915b203473031c57f36bc03588d8688e77b1c89d] <==
	I1020 12:43:33.507428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 12:44:03.513225       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-907116 -n embed-certs-907116
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-907116 -n embed-certs-907116: exit status 2 (320.905225ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-907116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
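The same non-Running-pods query, expressed against client-go instead of kubectl; a sketch that assumes the default kubeconfig rather than the embed-certs-907116 context:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Same field selector the helper passes to kubectl above.
        pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
            metav1.ListOptions{FieldSelector: "status.phase!=Running"})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Namespace + "/" + p.Name)
        }
    }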
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-907116
helpers_test.go:243: (dbg) docker inspect embed-certs-907116:

-- stdout --
	[
	    {
	        "Id": "dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff",
	        "Created": "2025-10-20T12:42:20.232246368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282376,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:43:23.947248853Z",
	            "FinishedAt": "2025-10-20T12:43:23.111913642Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff/hosts",
	        "LogPath": "/var/lib/docker/containers/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff/dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff-json.log",
	        "Name": "/embed-certs-907116",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-907116:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-907116",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dde9a162828ecfa6dfaac0e46559541a3114fa1461b7bb93e6a690e3f54820ff",
	                "LowerDir": "/var/lib/docker/overlay2/93acb91eef931e2da86d1db5f88d8dcfad07ad2799ddc8a67593938796c2477a-init/diff:/var/lib/docker/overlay2/44e78e3d58692260e3dd2c0921b9e8e68ab8d324e463358ced8e8a5f3ea3f72d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/93acb91eef931e2da86d1db5f88d8dcfad07ad2799ddc8a67593938796c2477a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/93acb91eef931e2da86d1db5f88d8dcfad07ad2799ddc8a67593938796c2477a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/93acb91eef931e2da86d1db5f88d8dcfad07ad2799ddc8a67593938796c2477a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-907116",
	                "Source": "/var/lib/docker/volumes/embed-certs-907116/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-907116",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-907116",
	                "name.minikube.sigs.k8s.io": "embed-certs-907116",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4dd377a3e78f245abb9abebbd1c02e79c4568c6c7f2ed56ec280372438a0b231",
	            "SandboxKey": "/var/run/docker/netns/4dd377a3e78f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-907116": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:b2:41:43:ff:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4e327fc0cc35f5e99ec36d310a3ce8c7214de7f81deb736225deef68fe8ea58b",
	                    "EndpointID": "7de7302fe808da40d714302be8ae3b6fffeae8e89301a3c204c849e112a0940b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-907116",
	                        "dde9a162828e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
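The pause helpers decide outcomes from the State block of this JSON (Status, Running, Paused). A minimal Go sketch that runs docker inspect for the same profile and decodes just those fields:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // containerState models only the State fields the check cares about.
    type containerState struct {
        State struct {
            Status  string `json:"Status"`
            Running bool   `json:"Running"`
            Paused  bool   `json:"Paused"`
        } `json:"State"`
    }

    func main() {
        out, err := exec.Command("docker", "inspect", "embed-certs-907116").Output()
        if err != nil {
            log.Fatal(err)
        }
        // docker inspect emits a JSON array, one element per container.
        var containers []containerState
        if err := json.Unmarshal(out, &containers); err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            fmt.Printf("status=%s running=%t paused=%t\n",
                c.State.Status, c.State.Running, c.State.Paused)
        }
    }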
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-907116 -n embed-certs-907116
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-907116 -n embed-certs-907116: exit status 2 (320.665193ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
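The --format values passed to minikube status in these checks ({{.Host}}, {{.APIServer}}) are Go text/template expressions evaluated against minikube's status struct. A sketch of that evaluation, with a hypothetical stand-in for the struct:

    package main

    import (
        "os"
        "text/template"
    )

    // Status is a hypothetical stand-in for minikube's internal status type.
    type Status struct {
        Host      string
        APIServer string
    }

    func main() {
        // Same template syntax the test passes via --format.
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        _ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Paused"})
    }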
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-907116 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-907116 logs -n 25: (1.128508507s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-312375 sudo cat /var/lib/kubelet/config.yaml                                                                                     │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo systemctl status docker --all --full --no-pager                                                                      │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 sudo systemctl cat docker --no-pager                                                                                      │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo cat /etc/docker/daemon.json                                                                                          │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 sudo docker system info                                                                                                   │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 sudo systemctl status cri-docker --all --full --no-pager                                                                  │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 sudo systemctl cat cri-docker --no-pager                                                                                  │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                             │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                       │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo cri-dockerd --version                                                                                                │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo systemctl status containerd --all --full --no-pager                                                                  │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ ssh     │ -p auto-312375 sudo systemctl cat containerd --no-pager                                                                                  │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo cat /lib/systemd/system/containerd.service                                                                           │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo cat /etc/containerd/config.toml                                                                                      │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo containerd config dump                                                                                               │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo systemctl status crio --all --full --no-pager                                                                        │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo systemctl cat crio --no-pager                                                                                        │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                              │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ ssh     │ -p auto-312375 sudo crio config                                                                                                          │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ delete  │ -p auto-312375                                                                                                                           │ auto-312375               │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │ 20 Oct 25 12:43 UTC │
	│ start   │ -p calico-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio   │ calico-312375             │ jenkins │ v1.37.0 │ 20 Oct 25 12:43 UTC │                     │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:44 UTC │                     │
	│ start   │ -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-196539 │ jenkins │ v1.37.0 │ 20 Oct 25 12:44 UTC │                     │
	│ image   │ embed-certs-907116 image list --format=json                                                                                              │ embed-certs-907116        │ jenkins │ v1.37.0 │ 20 Oct 25 12:44 UTC │ 20 Oct 25 12:44 UTC │
	│ pause   │ -p embed-certs-907116 --alsologtostderr -v=1                                                                                             │ embed-certs-907116        │ jenkins │ v1.37.0 │ 20 Oct 25 12:44 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:44:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:44:05.338558  296590 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:44:05.338678  296590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:44:05.338688  296590 out.go:374] Setting ErrFile to fd 2...
	I1020 12:44:05.338693  296590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:44:05.338931  296590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:44:05.339467  296590 out.go:368] Setting JSON to false
	I1020 12:44:05.340944  296590 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5194,"bootTime":1760959051,"procs":386,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:44:05.341066  296590 start.go:141] virtualization: kvm guest
	I1020 12:44:05.342934  296590 out.go:179] * [kubernetes-upgrade-196539] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:44:05.344639  296590 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:44:05.344683  296590 notify.go:220] Checking for updates...
	I1020 12:44:05.347205  296590 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:44:05.348843  296590 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:44:05.350259  296590 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:44:05.351478  296590 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:44:05.352682  296590 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:44:05.354462  296590 config.go:182] Loaded profile config "kubernetes-upgrade-196539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:44:05.355188  296590 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:44:05.386914  296590 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:44:05.387044  296590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:44:05.456800  296590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-20 12:44:05.444697997 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:44:05.456900  296590 docker.go:318] overlay module found
	I1020 12:44:05.460940  296590 out.go:179] * Using the docker driver based on existing profile
	I1020 12:44:05.462355  296590 start.go:305] selected driver: docker
	I1020 12:44:05.462374  296590 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-196539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-196539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:44:05.462464  296590 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:44:05.463033  296590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:44:05.529521  296590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-20 12:44:05.519757381 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:44:05.529835  296590 cni.go:84] Creating CNI manager for ""
	I1020 12:44:05.529894  296590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:44:05.529921  296590 start.go:349] cluster config:
	{Name:kubernetes-upgrade-196539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-196539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:44:05.534344  296590 out.go:179] * Starting "kubernetes-upgrade-196539" primary control-plane node in "kubernetes-upgrade-196539" cluster
	I1020 12:44:05.535806  296590 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:44:05.537362  296590 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:44:05.538737  296590 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:44:05.538800  296590 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 12:44:05.538816  296590 cache.go:58] Caching tarball of preloaded images
	I1020 12:44:05.538814  296590 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:44:05.538929  296590 preload.go:233] Found /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 12:44:05.538944  296590 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:44:05.539092  296590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/kubernetes-upgrade-196539/config.json ...
	I1020 12:44:05.561814  296590 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:44:05.561835  296590 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:44:05.561855  296590 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:44:05.561881  296590 start.go:360] acquireMachinesLock for kubernetes-upgrade-196539: {Name:mk1d06f9572547ac12885711cb1bcf0c77e257ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:44:05.561947  296590 start.go:364] duration metric: took 43.144µs to acquireMachinesLock for "kubernetes-upgrade-196539"
	I1020 12:44:05.561968  296590 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:44:05.561978  296590 fix.go:54] fixHost starting: 
	I1020 12:44:05.562248  296590 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-196539 --format={{.State.Status}}
	I1020 12:44:05.581398  296590 fix.go:112] recreateIfNeeded on kubernetes-upgrade-196539: state=Running err=<nil>
	W1020 12:44:05.581429  296590 fix.go:138] unexpected machine state, will restart: <nil>
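The fixHost path above reuses the running container instead of recreating it: minikube inspects the container's state and only re-provisions when it is already Running. A minimal shell sketch of that decision, using the profile name from the log (the branch bodies are illustrative, not minikube's exact logic):

    state="$(docker container inspect kubernetes-upgrade-196539 --format '{{.State.Status}}')"
    case "$state" in
      running)        echo "reusing running machine; re-provisioning in place" ;;
      exited|created) docker start kubernetes-upgrade-196539 ;;
      *)              echo "unexpected machine state: $state" ;;
    esac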
	I1020 12:44:03.960585  294742 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-312375:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.810642037s)
	I1020 12:44:03.960611  294742 kic.go:203] duration metric: took 4.810780457s to extract preloaded images to volume ...
	W1020 12:44:03.960694  294742 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1020 12:44:03.960728  294742 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1020 12:44:03.960763  294742 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:44:04.042732  294742 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-312375 --name calico-312375 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-312375 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-312375 --network calico-312375 --ip 192.168.85.2 --volume calico-312375:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 12:44:04.392935  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Running}}
	I1020 12:44:04.417928  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Status}}
	I1020 12:44:04.443348  294742 cli_runner.go:164] Run: docker exec calico-312375 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:44:04.537118  294742 oci.go:144] the created container "calico-312375" has a running status.
	I1020 12:44:04.537152  294742 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa...
	I1020 12:44:04.823733  294742 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:44:04.858892  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Status}}
	I1020 12:44:04.889022  294742 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:44:04.889045  294742 kic_runner.go:114] Args: [docker exec --privileged calico-312375 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:44:04.941923  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Status}}
	I1020 12:44:04.970702  294742 machine.go:93] provisionDockerMachine start ...
	I1020 12:44:04.970827  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:05.000949  294742 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:05.001216  294742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1020 12:44:05.001231  294742 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:44:05.164353  294742 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-312375
	
	I1020 12:44:05.164428  294742 ubuntu.go:182] provisioning hostname "calico-312375"
	I1020 12:44:05.164493  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:05.188350  294742 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:05.188657  294742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1020 12:44:05.188679  294742 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-312375 && echo "calico-312375" | sudo tee /etc/hostname
	I1020 12:44:05.359621  294742 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-312375
	
	I1020 12:44:05.359719  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:05.384710  294742 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:05.384985  294742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1020 12:44:05.385018  294742 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-312375' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-312375/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-312375' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:44:05.537683  294742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
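The SSH script above follows the Debian convention of mapping the node's hostname to 127.0.1.1 so local name lookups resolve without DNS. The same idempotent update as a standalone sketch (hostname taken from the log; GNU grep/sed assumed):

    NODE=calico-312375
    if ! grep -q "\s${NODE}$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${NODE}/" /etc/hosts
      else
        echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
      fi
    fi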
	I1020 12:44:05.537717  294742 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:44:05.537757  294742 ubuntu.go:190] setting up certificates
	I1020 12:44:05.537789  294742 provision.go:84] configureAuth start
	I1020 12:44:05.537856  294742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-312375
	I1020 12:44:05.558564  294742 provision.go:143] copyHostCerts
	I1020 12:44:05.558637  294742 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:44:05.558648  294742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:44:05.558736  294742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:44:05.558900  294742 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:44:05.558914  294742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:44:05.558959  294742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:44:05.559075  294742 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:44:05.559088  294742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:44:05.559127  294742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:44:05.559222  294742 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.calico-312375 san=[127.0.0.1 192.168.85.2 calico-312375 localhost minikube]
	I1020 12:44:05.892461  294742 provision.go:177] copyRemoteCerts
	I1020 12:44:05.892526  294742 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:44:05.892569  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:05.914703  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:06.024698  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:44:06.049420  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1020 12:44:06.070184  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:44:06.089997  294742 provision.go:87] duration metric: took 552.185991ms to configureAuth
	I1020 12:44:06.090037  294742 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:44:06.090193  294742 config.go:182] Loaded profile config "calico-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:44:06.090300  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:06.111516  294742 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:06.111760  294742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1020 12:44:06.111814  294742 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:44:06.383022  294742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:44:06.383066  294742 machine.go:96] duration metric: took 1.412340954s to provisionDockerMachine
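The container-runtime step above drops a sysconfig fragment that marks the Kubernetes service CIDR as an insecure registry, then restarts CRI-O to pick it up. The path and value below are verbatim from the command shown:

    sudo mkdir -p /etc/sysconfig
    printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio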
	I1020 12:44:06.383077  294742 client.go:171] duration metric: took 7.84715027s to LocalClient.Create
	I1020 12:44:06.383093  294742 start.go:167] duration metric: took 7.847213295s to libmachine.API.Create "calico-312375"
	I1020 12:44:06.383103  294742 start.go:293] postStartSetup for "calico-312375" (driver="docker")
	I1020 12:44:06.383119  294742 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:44:06.383180  294742 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:44:06.383223  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:06.402180  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:06.509633  294742 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:44:06.514277  294742 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:44:06.514320  294742 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:44:06.514334  294742 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:44:06.514394  294742 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:44:06.514507  294742 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:44:06.514607  294742 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:44:06.523396  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:44:06.546781  294742 start.go:296] duration metric: took 163.646503ms for postStartSetup
	I1020 12:44:06.547164  294742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-312375
	I1020 12:44:06.567436  294742 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/config.json ...
	I1020 12:44:06.567741  294742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:44:06.567812  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:06.588023  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:06.687097  294742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:44:06.691969  294742 start.go:128] duration metric: took 8.158497731s to createHost
	I1020 12:44:06.692069  294742 start.go:83] releasing machines lock for "calico-312375", held for 8.158764349s
	I1020 12:44:06.692162  294742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-312375
	I1020 12:44:06.712400  294742 ssh_runner.go:195] Run: cat /version.json
	I1020 12:44:06.712456  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:06.712481  294742 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:44:06.712547  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:06.733444  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:06.734066  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:06.901646  294742 ssh_runner.go:195] Run: systemctl --version
	I1020 12:44:06.909756  294742 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:44:06.959193  294742 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:44:06.965303  294742 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:44:06.965375  294742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:44:07.008888  294742 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
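ssh_runner logs the find invocation above with its shell quoting stripped; a correctly quoted version of the same pass, which renames any bridge/podman CNI configs to *.mk_disabled so they cannot conflict with the CNI minikube installs, would be:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;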
	I1020 12:44:07.008917  294742 start.go:495] detecting cgroup driver to use...
	I1020 12:44:07.008952  294742 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:44:07.009004  294742 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:44:07.035141  294742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:44:07.054137  294742 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:44:07.054211  294742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:44:07.079378  294742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:44:07.103118  294742 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:44:07.209760  294742 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:44:07.321594  294742 docker.go:234] disabling docker service ...
	I1020 12:44:07.321668  294742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:44:07.345181  294742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:44:07.360697  294742 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:44:07.490013  294742 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:44:07.602041  294742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:44:07.618630  294742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:44:07.634357  294742 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:44:07.634417  294742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.645469  294742 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:44:07.645610  294742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.655809  294742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.666214  294742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.675458  294742 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:44:07.684099  294742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.693633  294742 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.709957  294742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:07.719613  294742 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:44:07.727987  294742 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:44:07.736655  294742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:44:07.845121  294742 ssh_runner.go:195] Run: sudo systemctl restart crio
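The crio.go steps above all edit /etc/crio/crio.conf.d/02-crio.conf in place; the two settings that matter most for this suite are the pause image and the cgroup manager, applied with the same sed commands the log shows and activated by the reload/restart pair:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio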
	I1020 12:44:07.994845  294742 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:44:07.994921  294742 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:44:08.000289  294742 start.go:563] Will wait 60s for crictl version
	I1020 12:44:08.000357  294742 ssh_runner.go:195] Run: which crictl
	I1020 12:44:08.005502  294742 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:44:08.053214  294742 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:44:08.053344  294742 ssh_runner.go:195] Run: crio --version
	I1020 12:44:08.093432  294742 ssh_runner.go:195] Run: crio --version
	I1020 12:44:08.137011  294742 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:44:08.138570  294742 cli_runner.go:164] Run: docker network inspect calico-312375 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:44:08.164351  294742 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 12:44:08.169095  294742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
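That one-liner rewrites /etc/hosts without sed: it filters out any stale host.minikube.internal line, appends a fresh one pointing at the docker network gateway, and copies the result back atomically. Unrolled (gateway IP from the log):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.85.1\thost.minikube.internal\n'
    } > "/tmp/h.$$" && sudo cp "/tmp/h.$$" /etc/hosts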
	I1020 12:44:08.183871  294742 kubeadm.go:883] updating cluster {Name:calico-312375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-312375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:44:08.184022  294742 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:44:08.184099  294742 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:44:08.238216  294742 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:44:08.238239  294742 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:44:08.238293  294742 ssh_runner.go:195] Run: sudo crictl images --output json
	W1020 12:44:05.973622  282174 pod_ready.go:104] pod "coredns-66bc5c9577-vpzk5" is not "Ready", error: <nil>
	W1020 12:44:07.974822  282174 pod_ready.go:104] pod "coredns-66bc5c9577-vpzk5" is not "Ready", error: <nil>
	I1020 12:44:05.583417  296590 out.go:252] * Updating the running docker "kubernetes-upgrade-196539" container ...
	I1020 12:44:05.583447  296590 machine.go:93] provisionDockerMachine start ...
	I1020 12:44:05.583527  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:05.604993  296590 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:05.605324  296590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1020 12:44:05.605340  296590 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:44:05.749240  296590 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-196539
	
	I1020 12:44:05.749284  296590 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-196539"
	I1020 12:44:05.749366  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:05.768474  296590 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:05.768811  296590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1020 12:44:05.768830  296590 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-196539 && echo "kubernetes-upgrade-196539" | sudo tee /etc/hostname
	I1020 12:44:05.925416  296590 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-196539
	
	I1020 12:44:05.925527  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:05.951506  296590 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:05.951737  296590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1020 12:44:05.951756  296590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-196539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-196539/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-196539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:44:06.100129  296590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:44:06.100159  296590 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-11075/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-11075/.minikube}
	I1020 12:44:06.100193  296590 ubuntu.go:190] setting up certificates
	I1020 12:44:06.100218  296590 provision.go:84] configureAuth start
	I1020 12:44:06.100297  296590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-196539
	I1020 12:44:06.121903  296590 provision.go:143] copyHostCerts
	I1020 12:44:06.121971  296590 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem, removing ...
	I1020 12:44:06.121990  296590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem
	I1020 12:44:06.122058  296590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/ca.pem (1082 bytes)
	I1020 12:44:06.122213  296590 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem, removing ...
	I1020 12:44:06.122227  296590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem
	I1020 12:44:06.122258  296590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/cert.pem (1123 bytes)
	I1020 12:44:06.122351  296590 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem, removing ...
	I1020 12:44:06.122362  296590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem
	I1020 12:44:06.122389  296590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-11075/.minikube/key.pem (1679 bytes)
	I1020 12:44:06.122471  296590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-196539 san=[127.0.0.1 192.168.94.2 kubernetes-upgrade-196539 localhost minikube]
	I1020 12:44:06.329688  296590 provision.go:177] copyRemoteCerts
	I1020 12:44:06.329755  296590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:44:06.329813  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:06.351301  296590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:44:06.455971  296590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1020 12:44:06.477331  296590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:44:06.496977  296590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:44:06.517946  296590 provision.go:87] duration metric: took 417.714045ms to configureAuth
	I1020 12:44:06.517975  296590 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:44:06.518142  296590 config.go:182] Loaded profile config "kubernetes-upgrade-196539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:44:06.518276  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:06.538300  296590 main.go:141] libmachine: Using SSH client type: native
	I1020 12:44:06.538604  296590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1020 12:44:06.538622  296590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:44:07.125257  296590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:44:07.125291  296590 machine.go:96] duration metric: took 1.541836316s to provisionDockerMachine
	I1020 12:44:07.125306  296590 start.go:293] postStartSetup for "kubernetes-upgrade-196539" (driver="docker")
	I1020 12:44:07.125320  296590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:44:07.125407  296590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:44:07.125459  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:07.156037  296590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:44:07.270284  296590 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:44:07.274561  296590 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:44:07.274587  296590 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:44:07.274597  296590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/addons for local assets ...
	I1020 12:44:07.274652  296590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-11075/.minikube/files for local assets ...
	I1020 12:44:07.274750  296590 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem -> 145922.pem in /etc/ssl/certs
	I1020 12:44:07.274938  296590 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:44:07.284152  296590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:44:07.305077  296590 start.go:296] duration metric: took 179.754124ms for postStartSetup
	I1020 12:44:07.305165  296590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:44:07.305214  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:07.328685  296590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:44:07.437134  296590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:44:07.450069  296590 fix.go:56] duration metric: took 1.888081667s for fixHost
	I1020 12:44:07.450098  296590 start.go:83] releasing machines lock for "kubernetes-upgrade-196539", held for 1.888138292s
	I1020 12:44:07.450181  296590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-196539
	I1020 12:44:07.475193  296590 ssh_runner.go:195] Run: cat /version.json
	I1020 12:44:07.475254  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:07.475274  296590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:44:07.475358  296590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-196539
	I1020 12:44:07.501689  296590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:44:07.502601  296590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kubernetes-upgrade-196539/id_rsa Username:docker}
	I1020 12:44:07.692643  296590 ssh_runner.go:195] Run: systemctl --version
	I1020 12:44:07.700839  296590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:44:07.751879  296590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:44:07.757818  296590 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:44:07.757921  296590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:44:07.772191  296590 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 12:44:07.772218  296590 start.go:495] detecting cgroup driver to use...
	I1020 12:44:07.772335  296590 detect.go:190] detected "systemd" cgroup driver on host os
	I1020 12:44:07.772426  296590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:44:07.791034  296590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:44:07.807071  296590 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:44:07.807190  296590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:44:07.827015  296590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:44:07.844569  296590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:44:07.982039  296590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:44:08.118144  296590 docker.go:234] disabling docker service ...
	I1020 12:44:08.118212  296590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:44:08.138895  296590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:44:08.158142  296590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:44:08.308529  296590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:44:08.457448  296590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
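Because the runtime under test is CRI-O, both cri-dockerd and the docker daemon are stopped, disabled, and masked above so socket activation cannot bring them back. Condensed into standalone commands (unit names verbatim from the log; batching them into single invocations is an illustrative shortcut):

    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"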
	I1020 12:44:08.477945  296590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:44:08.498084  296590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:44:08.498182  296590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.515114  296590 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1020 12:44:08.515186  296590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.534801  296590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.548353  296590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.561190  296590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:44:08.571009  296590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.583439  296590 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.594865  296590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:44:08.606506  296590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:44:08.615110  296590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:44:08.623381  296590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:44:08.767604  296590 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:44:12.431877  290109 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 12:44:12.431934  290109 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 12:44:12.432027  290109 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 12:44:12.432076  290109 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1020 12:44:12.432113  290109 kubeadm.go:318] OS: Linux
	I1020 12:44:12.432199  290109 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 12:44:12.432295  290109 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 12:44:12.432339  290109 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 12:44:12.432432  290109 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 12:44:12.432503  290109 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 12:44:12.432561  290109 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 12:44:12.432603  290109 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 12:44:12.432673  290109 kubeadm.go:318] CGROUPS_IO: enabled
	I1020 12:44:12.432812  290109 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 12:44:12.432949  290109 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 12:44:12.433084  290109 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 12:44:12.433179  290109 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 12:44:12.434827  290109 out.go:252]   - Generating certificates and keys ...
	I1020 12:44:12.434923  290109 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 12:44:12.435020  290109 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 12:44:12.435103  290109 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 12:44:12.435189  290109 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 12:44:12.435288  290109 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 12:44:12.435362  290109 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 12:44:12.435444  290109 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 12:44:12.435622  290109 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kindnet-312375 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1020 12:44:12.435694  290109 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 12:44:12.435880  290109 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kindnet-312375 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1020 12:44:12.435942  290109 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 12:44:12.436011  290109 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 12:44:12.436091  290109 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 12:44:12.436169  290109 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 12:44:12.436249  290109 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 12:44:12.436356  290109 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 12:44:12.436436  290109 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 12:44:12.436543  290109 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 12:44:12.436627  290109 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 12:44:12.436746  290109 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 12:44:12.436876  290109 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1020 12:44:12.439264  290109 out.go:252]   - Booting up control plane ...
	I1020 12:44:12.439356  290109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 12:44:12.439424  290109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 12:44:12.439482  290109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 12:44:12.439607  290109 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 12:44:12.439727  290109 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 12:44:12.439860  290109 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 12:44:12.439964  290109 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 12:44:12.440017  290109 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 12:44:12.440134  290109 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 12:44:12.440230  290109 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 12:44:12.440286  290109 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.490951ms
	I1020 12:44:12.440374  290109 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 12:44:12.440448  290109 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1020 12:44:12.440526  290109 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 12:44:12.440598  290109 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 12:44:12.440691  290109 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.989159483s
	I1020 12:44:12.440761  290109 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.558674995s
	I1020 12:44:12.440832  290109 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.50220264s
	I1020 12:44:12.440923  290109 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 12:44:12.441035  290109 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 12:44:12.441094  290109 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 12:44:12.441261  290109 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-312375 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 12:44:12.441317  290109 kubeadm.go:318] [bootstrap-token] Using token: zldgui.lrpvkfzs6byfp132
	I1020 12:44:12.442712  290109 out.go:252]   - Configuring RBAC rules ...
	I1020 12:44:12.442845  290109 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 12:44:12.442949  290109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 12:44:12.443140  290109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 12:44:12.443304  290109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 12:44:12.443439  290109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 12:44:12.443553  290109 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 12:44:12.443736  290109 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 12:44:12.443808  290109 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 12:44:12.443860  290109 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 12:44:12.443867  290109 kubeadm.go:318] 
	I1020 12:44:12.443913  290109 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 12:44:12.443919  290109 kubeadm.go:318] 
	I1020 12:44:12.443986  290109 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 12:44:12.443998  290109 kubeadm.go:318] 
	I1020 12:44:12.444047  290109 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 12:44:12.444135  290109 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 12:44:12.444194  290109 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 12:44:12.444203  290109 kubeadm.go:318] 
	I1020 12:44:12.444285  290109 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 12:44:12.444296  290109 kubeadm.go:318] 
	I1020 12:44:12.444336  290109 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 12:44:12.444342  290109 kubeadm.go:318] 
	I1020 12:44:12.444405  290109 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 12:44:12.444512  290109 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 12:44:12.444606  290109 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 12:44:12.444615  290109 kubeadm.go:318] 
	I1020 12:44:12.444727  290109 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 12:44:12.444854  290109 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 12:44:12.444862  290109 kubeadm.go:318] 
	I1020 12:44:12.444982  290109 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token zldgui.lrpvkfzs6byfp132 \
	I1020 12:44:12.445139  290109 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e \
	I1020 12:44:12.445162  290109 kubeadm.go:318] 	--control-plane 
	I1020 12:44:12.445167  290109 kubeadm.go:318] 
	I1020 12:44:12.445286  290109 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 12:44:12.445297  290109 kubeadm.go:318] 
	I1020 12:44:12.445420  290109 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token zldgui.lrpvkfzs6byfp132 \
	I1020 12:44:12.445588  290109 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e 
	I1020 12:44:12.445599  290109 cni.go:84] Creating CNI manager for "kindnet"
	I1020 12:44:12.447104  290109 out.go:179] * Configuring CNI (Container Networking Interface) ...
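The join commands printed above carry a --discovery-token-ca-cert-hash. If that hash is ever needed independently, the standard kubeadm recipe recomputes it from the cluster CA on the control plane (an aside; this pipeline is not part of the test run):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'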
	W1020 12:44:10.474446  282174 pod_ready.go:104] pod "coredns-66bc5c9577-vpzk5" is not "Ready", error: <nil>
	I1020 12:44:11.474426  282174 pod_ready.go:94] pod "coredns-66bc5c9577-vpzk5" is "Ready"
	I1020 12:44:11.474455  282174 pod_ready.go:86] duration metric: took 37.006911205s for pod "coredns-66bc5c9577-vpzk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.477456  282174 pod_ready.go:83] waiting for pod "etcd-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.482241  282174 pod_ready.go:94] pod "etcd-embed-certs-907116" is "Ready"
	I1020 12:44:11.482264  282174 pod_ready.go:86] duration metric: took 4.783797ms for pod "etcd-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.484429  282174 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.488616  282174 pod_ready.go:94] pod "kube-apiserver-embed-certs-907116" is "Ready"
	I1020 12:44:11.488637  282174 pod_ready.go:86] duration metric: took 4.185977ms for pod "kube-apiserver-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.490659  282174 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.672972  282174 pod_ready.go:94] pod "kube-controller-manager-embed-certs-907116" is "Ready"
	I1020 12:44:11.673006  282174 pod_ready.go:86] duration metric: took 182.327383ms for pod "kube-controller-manager-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:11.871013  282174 pod_ready.go:83] waiting for pod "kube-proxy-s2xbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:12.270962  282174 pod_ready.go:94] pod "kube-proxy-s2xbv" is "Ready"
	I1020 12:44:12.270992  282174 pod_ready.go:86] duration metric: took 399.955657ms for pod "kube-proxy-s2xbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:12.471348  282174 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:12.870964  282174 pod_ready.go:94] pod "kube-scheduler-embed-certs-907116" is "Ready"
	I1020 12:44:12.870988  282174 pod_ready.go:86] duration metric: took 399.618167ms for pod "kube-scheduler-embed-certs-907116" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:44:12.870999  282174 pod_ready.go:40] duration metric: took 38.406812384s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:44:12.924876  282174 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:44:12.926672  282174 out.go:179] * Done! kubectl is now configured to use "embed-certs-907116" cluster and "default" namespace by default
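pod_ready.go polls each kube-system component until its Ready condition holds or the pod is gone. A rough kubectl equivalent of the same gate, using the label selectors from the log (timeout illustrative; note kubectl wait does not treat a deleted pod as success the way the test helper does):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl wait --for=condition=Ready pod -l "$sel" -n kube-system --timeout=120s
    done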
	I1020 12:44:08.277351  294742 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:44:08.277377  294742 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:44:08.277386  294742 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 12:44:08.277494  294742 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-312375 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-312375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
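The unit text above becomes a systemd drop-in; the scp a few lines below installs it as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The empty ExecStart= line is significant: it clears the packaged kubelet command before the minikube one is set. Written by hand (flags verbatim from the log):

    sudo install -d /etc/systemd/system/kubelet.service.d
    { echo '[Unit]'
      echo 'Wants=crio.service'
      echo
      echo '[Service]'
      echo 'ExecStart='
      echo 'ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-312375 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2'
      echo
      echo '[Install]'
    } | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload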
	I1020 12:44:08.277576  294742 ssh_runner.go:195] Run: crio config
	I1020 12:44:08.355011  294742 cni.go:84] Creating CNI manager for "calico"
	I1020 12:44:08.355055  294742 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:44:08.355085  294742 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-312375 NodeName:calico-312375 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:44:08.355265  294742 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-312375"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
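
The generated config above is one YAML stream carrying four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A small sketch of sanity-checking such a stream with gopkg.in/yaml.v3 before handing it to kubeadm — illustrative only, with an assumed filename; minikube itself templates and scp's the file as shown below:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the file above
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]any
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                panic(err)
            }
            // Every document in the stream must carry apiVersion and kind.
            fmt.Printf("apiVersion=%v kind=%v\n", doc["apiVersion"], doc["kind"])
        }
    }
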
	
	I1020 12:44:08.355336  294742 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:44:08.367940  294742 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:44:08.368021  294742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:44:08.386983  294742 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1020 12:44:08.409010  294742 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:44:08.431202  294742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1020 12:44:08.455323  294742 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:44:08.460990  294742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
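
The bash one-liner above makes the /etc/hosts update idempotent: grep -v drops any stale control-plane.minikube.internal record, echo appends the fresh one, and the temp file is copied back over /etc/hosts. The same filter-then-append step as a standalone Go sketch (writing to a local "./hosts" stand-in, since the real file needs root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops any line ending in "\t<host>" and appends "<ip>\t<host>".
    func upsertHost(hostsFile, ip, host string) error {
        data, err := os.ReadFile(hostsFile)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHost("./hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }
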
	I1020 12:44:08.476996  294742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:44:08.597660  294742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:44:08.632247  294742 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375 for IP: 192.168.85.2
	I1020 12:44:08.632283  294742 certs.go:195] generating shared ca certs ...
	I1020 12:44:08.632303  294742 certs.go:227] acquiring lock for ca certs: {Name:mk4c7da99de5f33cefa0fd11c12000fda55ac4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:08.632459  294742 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key
	I1020 12:44:08.632522  294742 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key
	I1020 12:44:08.632532  294742 certs.go:257] generating profile certs ...
	I1020 12:44:08.632601  294742 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/client.key
	I1020 12:44:08.632619  294742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/client.crt with IP's: []
	I1020 12:44:09.202850  294742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/client.crt ...
	I1020 12:44:09.202877  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/client.crt: {Name:mkbdec429d4cbda4fb9bc977f19afd051ce3355d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:09.203074  294742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/client.key ...
	I1020 12:44:09.203085  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/client.key: {Name:mk7ff7f7b99fe7d84ed5cb3c6639b23b253ed35a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:09.203168  294742 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.key.98b9624f
	I1020 12:44:09.203183  294742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.crt.98b9624f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1020 12:44:09.266475  294742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.crt.98b9624f ...
	I1020 12:44:09.266510  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.crt.98b9624f: {Name:mk0fabf3fcd389c49d8e41b45fc5dcfbc97753e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:09.266711  294742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.key.98b9624f ...
	I1020 12:44:09.266738  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.key.98b9624f: {Name:mk87d98dfeeb2648e041ef4287a4b054cbeaeb28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:09.266886  294742 certs.go:382] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.crt.98b9624f -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.crt
	I1020 12:44:09.267005  294742 certs.go:386] copying /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.key.98b9624f -> /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.key
	I1020 12:44:09.267115  294742 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.key
	I1020 12:44:09.267136  294742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.crt with IP's: []
	I1020 12:44:09.828233  294742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.crt ...
	I1020 12:44:09.828260  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.crt: {Name:mkbdd621c1182dfd2366cefca902df57d087dc5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:09.828470  294742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.key ...
	I1020 12:44:09.828486  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.key: {Name:mkec8558969c7bf7b65ab79964cbf5f89003acf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
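
The profile certs generated above (a client cert, an apiserver cert signed for the IP set [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], and a proxy-client cert) follow the standard crypto/x509 pattern: build a template, then sign it. A trimmed sketch of one such cert — self-signed here for brevity, whereas the real ones are signed by minikubeCA, and the subject and lifetime are assumptions:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube-user"}, // example subject
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN IPs mirroring the apiserver cert in the log above.
            IPAddresses: []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
        }
        // Self-signed (template doubles as parent); minikube signs with its CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
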
	I1020 12:44:09.828714  294742 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem (1338 bytes)
	W1020 12:44:09.828765  294742 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592_empty.pem, impossibly tiny 0 bytes
	I1020 12:44:09.828794  294742 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 12:44:09.828825  294742 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:44:09.828856  294742 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:44:09.828888  294742 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/certs/key.pem (1679 bytes)
	I1020 12:44:09.828943  294742 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem (1708 bytes)
	I1020 12:44:09.829578  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:44:09.848292  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1020 12:44:09.866601  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:44:09.884359  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1020 12:44:09.903577  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1020 12:44:09.921254  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1020 12:44:09.939055  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:44:09.957033  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/calico-312375/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:44:09.975799  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1338 bytes)
	I1020 12:44:09.995242  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/ssl/certs/145922.pem --> /usr/share/ca-certificates/145922.pem (1708 bytes)
	I1020 12:44:10.013648  294742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:44:10.035020  294742 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:44:10.050006  294742 ssh_runner.go:195] Run: openssl version
	I1020 12:44:10.057233  294742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I1020 12:44:10.068165  294742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I1020 12:44:10.073118  294742 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:02 /usr/share/ca-certificates/14592.pem
	I1020 12:44:10.073176  294742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I1020 12:44:10.123459  294742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/51391683.0"
	I1020 12:44:10.135609  294742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145922.pem && ln -fs /usr/share/ca-certificates/145922.pem /etc/ssl/certs/145922.pem"
	I1020 12:44:10.147033  294742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145922.pem
	I1020 12:44:10.151755  294742 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:02 /usr/share/ca-certificates/145922.pem
	I1020 12:44:10.151824  294742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145922.pem
	I1020 12:44:10.198624  294742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145922.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:44:10.209926  294742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:44:10.220167  294742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:44:10.224551  294742 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:44:10.224621  294742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:44:10.269750  294742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
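
The openssl/symlink pairs above implement OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the subject-name hash, and linking the PEM as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem) lets lookup-by-hash find it. A small sketch that inspects such a PEM with crypto/x509 before linking — the hash itself still comes from the openssl CLI as in the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The /etc/ssl/certs/<hash>.0 symlink name is derived from this subject.
        fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
    }
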
	I1020 12:44:10.280228  294742 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:44:10.284865  294742 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 12:44:10.284926  294742 kubeadm.go:400] StartCluster: {Name:calico-312375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-312375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:44:10.285003  294742 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:44:10.285062  294742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:44:10.316297  294742 cri.go:89] found id: ""
	I1020 12:44:10.316373  294742 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:44:10.326187  294742 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:44:10.335518  294742 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 12:44:10.335580  294742 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:44:10.344858  294742 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 12:44:10.344898  294742 kubeadm.go:157] found existing configuration files:
	
	I1020 12:44:10.344954  294742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 12:44:10.354104  294742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 12:44:10.354165  294742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 12:44:10.363401  294742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 12:44:10.372401  294742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 12:44:10.372470  294742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:44:10.381702  294742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 12:44:10.391079  294742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 12:44:10.391140  294742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:44:10.400839  294742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 12:44:10.410272  294742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 12:44:10.410337  294742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 12:44:10.420211  294742 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 12:44:10.493661  294742 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1020 12:44:10.565558  294742 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1020 12:44:12.448195  290109 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 12:44:12.452640  290109 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 12:44:12.452672  290109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 12:44:12.466669  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 12:44:12.692968  290109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 12:44:12.693100  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:12.693195  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-312375 minikube.k8s.io/updated_at=2025_10_20T12_44_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=kindnet-312375 minikube.k8s.io/primary=true
	I1020 12:44:12.706185  290109 ops.go:34] apiserver oom_adj: -16
	I1020 12:44:12.806651  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:13.307466  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:13.806959  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:14.307287  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:14.806956  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:15.307599  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:15.807545  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:16.306793  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:16.806878  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:17.307479  290109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:17.406027  290109 kubeadm.go:1113] duration metric: took 4.712979593s to wait for elevateKubeSystemPrivileges
	I1020 12:44:17.406067  290109 kubeadm.go:402] duration metric: took 17.573398912s to StartCluster
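
The repeated `kubectl get sa default` runs above are a readiness gate: kubeadm has finished, but the default ServiceAccount only exists once the controller-manager's service-account controller has created it, so minikube retries on a roughly 500ms cadence before granting kube-system:default the cluster-admin binding. The same gate as a hedged os/exec sketch (the attempt budget is an assumption; the kubeconfig path is the one from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for i := 0; i < 60; i++ { // ~30s budget at 500ms per attempt (assumed)
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", "/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default ServiceAccount exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("default ServiceAccount never appeared")
    }
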
	I1020 12:44:17.406098  290109 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:17.406177  290109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:44:17.408438  290109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:17.408697  290109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 12:44:17.408718  290109 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:44:17.409152  290109 config.go:182] Loaded profile config "kindnet-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:44:17.409124  290109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:44:17.409214  290109 addons.go:69] Setting storage-provisioner=true in profile "kindnet-312375"
	I1020 12:44:17.409232  290109 addons.go:238] Setting addon storage-provisioner=true in "kindnet-312375"
	I1020 12:44:17.409261  290109 host.go:66] Checking if "kindnet-312375" exists ...
	I1020 12:44:17.409276  290109 addons.go:69] Setting default-storageclass=true in profile "kindnet-312375"
	I1020 12:44:17.409294  290109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-312375"
	I1020 12:44:17.409624  290109 cli_runner.go:164] Run: docker container inspect kindnet-312375 --format={{.State.Status}}
	I1020 12:44:17.409808  290109 cli_runner.go:164] Run: docker container inspect kindnet-312375 --format={{.State.Status}}
	I1020 12:44:17.412199  290109 out.go:179] * Verifying Kubernetes components...
	I1020 12:44:17.414746  290109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:44:17.436653  290109 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:44:17.437180  290109 addons.go:238] Setting addon default-storageclass=true in "kindnet-312375"
	I1020 12:44:17.437225  290109 host.go:66] Checking if "kindnet-312375" exists ...
	I1020 12:44:17.437692  290109 cli_runner.go:164] Run: docker container inspect kindnet-312375 --format={{.State.Status}}
	I1020 12:44:17.438151  290109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:44:17.438173  290109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:44:17.438229  290109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-312375
	I1020 12:44:17.465568  290109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:44:17.465592  290109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:44:17.465651  290109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-312375
	I1020 12:44:17.465986  290109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kindnet-312375/id_rsa Username:docker}
	I1020 12:44:17.487518  290109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/kindnet-312375/id_rsa Username:docker}
	I1020 12:44:17.504894  290109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 12:44:17.569976  290109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:44:17.587395  290109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:44:17.604809  290109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:44:17.684900  290109 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1020 12:44:17.686623  290109 node_ready.go:35] waiting up to 15m0s for node "kindnet-312375" to be "Ready" ...
	I1020 12:44:17.906437  290109 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1020 12:44:17.907707  290109 addons.go:514] duration metric: took 498.582147ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1020 12:44:18.188900  290109 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-312375" context rescaled to 1 replicas
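
The rescale logged just above trims the coredns deployment to a single replica, which is all a one-node cluster needs. One way to do the equivalent with client-go's scale subresource — a sketch with a hypothetical kubeconfig path, not minikube's kapi code:

    package main

    import (
        "context"

        autoscalingv1 "k8s.io/api/autoscaling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        scale := &autoscalingv1.Scale{
            ObjectMeta: metav1.ObjectMeta{Name: "coredns", Namespace: "kube-system"},
            Spec:       autoscalingv1.ScaleSpec{Replicas: 1},
        }
        // Write the scale subresource instead of patching the deployment spec.
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(
            context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
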
	I1020 12:44:20.251338  294742 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 12:44:20.251408  294742 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 12:44:20.251516  294742 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 12:44:20.251595  294742 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1020 12:44:20.251647  294742 kubeadm.go:318] OS: Linux
	I1020 12:44:20.251718  294742 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 12:44:20.251814  294742 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 12:44:20.251885  294742 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 12:44:20.251980  294742 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 12:44:20.252121  294742 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 12:44:20.252176  294742 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 12:44:20.252218  294742 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 12:44:20.252261  294742 kubeadm.go:318] CGROUPS_IO: enabled
	I1020 12:44:20.252326  294742 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 12:44:20.252432  294742 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 12:44:20.252562  294742 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 12:44:20.252654  294742 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 12:44:20.254331  294742 out.go:252]   - Generating certificates and keys ...
	I1020 12:44:20.254421  294742 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 12:44:20.254523  294742 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 12:44:20.254619  294742 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 12:44:20.254709  294742 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 12:44:20.254834  294742 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 12:44:20.254912  294742 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 12:44:20.255011  294742 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 12:44:20.255126  294742 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [calico-312375 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1020 12:44:20.255173  294742 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 12:44:20.255289  294742 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [calico-312375 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1020 12:44:20.255421  294742 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 12:44:20.255484  294742 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 12:44:20.255542  294742 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 12:44:20.255603  294742 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 12:44:20.255650  294742 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 12:44:20.255703  294742 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 12:44:20.255752  294742 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 12:44:20.255849  294742 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 12:44:20.255909  294742 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 12:44:20.255979  294742 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 12:44:20.256036  294742 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1020 12:44:20.258256  294742 out.go:252]   - Booting up control plane ...
	I1020 12:44:20.258376  294742 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 12:44:20.258491  294742 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 12:44:20.258594  294742 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 12:44:20.258738  294742 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 12:44:20.258881  294742 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 12:44:20.259008  294742 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 12:44:20.259099  294742 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 12:44:20.259136  294742 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 12:44:20.259260  294742 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 12:44:20.259364  294742 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 12:44:20.259415  294742 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001801365s
	I1020 12:44:20.259511  294742 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 12:44:20.259624  294742 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1020 12:44:20.259763  294742 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 12:44:20.259919  294742 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 12:44:20.259993  294742 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.445791212s
	I1020 12:44:20.260051  294742 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.923123487s
	I1020 12:44:20.260120  294742 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.502011455s
	I1020 12:44:20.260231  294742 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 12:44:20.260368  294742 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 12:44:20.260465  294742 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 12:44:20.260759  294742 kubeadm.go:318] [mark-control-plane] Marking the node calico-312375 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 12:44:20.260868  294742 kubeadm.go:318] [bootstrap-token] Using token: tjsqif.gnw9gi313y3h01f3
	I1020 12:44:20.262298  294742 out.go:252]   - Configuring RBAC rules ...
	I1020 12:44:20.262400  294742 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 12:44:20.262478  294742 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 12:44:20.262600  294742 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 12:44:20.262747  294742 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 12:44:20.262886  294742 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 12:44:20.262960  294742 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 12:44:20.263105  294742 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 12:44:20.263169  294742 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 12:44:20.263218  294742 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 12:44:20.263228  294742 kubeadm.go:318] 
	I1020 12:44:20.263304  294742 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 12:44:20.263314  294742 kubeadm.go:318] 
	I1020 12:44:20.263427  294742 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 12:44:20.263449  294742 kubeadm.go:318] 
	I1020 12:44:20.263489  294742 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 12:44:20.263584  294742 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 12:44:20.263659  294742 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 12:44:20.263668  294742 kubeadm.go:318] 
	I1020 12:44:20.263745  294742 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 12:44:20.263755  294742 kubeadm.go:318] 
	I1020 12:44:20.263853  294742 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 12:44:20.263864  294742 kubeadm.go:318] 
	I1020 12:44:20.263938  294742 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 12:44:20.264058  294742 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 12:44:20.264131  294742 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 12:44:20.264137  294742 kubeadm.go:318] 
	I1020 12:44:20.264211  294742 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 12:44:20.264281  294742 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 12:44:20.264286  294742 kubeadm.go:318] 
	I1020 12:44:20.264381  294742 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token tjsqif.gnw9gi313y3h01f3 \
	I1020 12:44:20.264485  294742 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e \
	I1020 12:44:20.264506  294742 kubeadm.go:318] 	--control-plane 
	I1020 12:44:20.264525  294742 kubeadm.go:318] 
	I1020 12:44:20.264603  294742 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 12:44:20.264609  294742 kubeadm.go:318] 
	I1020 12:44:20.264686  294742 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token tjsqif.gnw9gi313y3h01f3 \
	I1020 12:44:20.264811  294742 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:320d71c03f38b4dc3ce52bdd14d1516dfbd360aae9303ba3c3c8021999d9f72e 
	I1020 12:44:20.264823  294742 cni.go:84] Creating CNI manager for "calico"
	I1020 12:44:20.266493  294742 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1020 12:44:20.269071  294742 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 12:44:20.269092  294742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1020 12:44:20.284385  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 12:44:21.102888  294742 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 12:44:21.102967  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:21.103016  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-312375 minikube.k8s.io/updated_at=2025_10_20T12_44_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=calico-312375 minikube.k8s.io/primary=true
	I1020 12:44:21.115039  294742 ops.go:34] apiserver oom_adj: -16
	I1020 12:44:21.176324  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:21.676988  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:22.177282  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:22.676977  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:23.176821  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1020 12:44:19.690228  290109 node_ready.go:57] node "kindnet-312375" has "Ready":"False" status (will retry)
	W1020 12:44:21.690294  290109 node_ready.go:57] node "kindnet-312375" has "Ready":"False" status (will retry)
	I1020 12:44:23.677177  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:24.176970  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:24.676553  294742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:44:24.744631  294742 kubeadm.go:1113] duration metric: took 3.641723429s to wait for elevateKubeSystemPrivileges
	I1020 12:44:24.744673  294742 kubeadm.go:402] duration metric: took 14.459752641s to StartCluster
	I1020 12:44:24.744691  294742 settings.go:142] acquiring lock: {Name:mk53f067c07152946a0f1a9be9dab5aef0554b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:24.744752  294742 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:44:24.746545  294742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/kubeconfig: {Name:mk01cf2fbd00945f5fceb1254ffb3b0948f80d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:44:24.746805  294742 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:44:24.746869  294742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 12:44:24.746869  294742 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:44:24.746967  294742 addons.go:69] Setting storage-provisioner=true in profile "calico-312375"
	I1020 12:44:24.746988  294742 addons.go:238] Setting addon storage-provisioner=true in "calico-312375"
	I1020 12:44:24.747025  294742 host.go:66] Checking if "calico-312375" exists ...
	I1020 12:44:24.747029  294742 addons.go:69] Setting default-storageclass=true in profile "calico-312375"
	I1020 12:44:24.747054  294742 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-312375"
	I1020 12:44:24.747052  294742 config.go:182] Loaded profile config "calico-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:44:24.747420  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Status}}
	I1020 12:44:24.747600  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Status}}
	I1020 12:44:24.748658  294742 out.go:179] * Verifying Kubernetes components...
	I1020 12:44:24.750265  294742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:44:24.770805  294742 addons.go:238] Setting addon default-storageclass=true in "calico-312375"
	I1020 12:44:24.770848  294742 host.go:66] Checking if "calico-312375" exists ...
	I1020 12:44:24.771020  294742 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:44:24.771246  294742 cli_runner.go:164] Run: docker container inspect calico-312375 --format={{.State.Status}}
	I1020 12:44:24.772464  294742 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:44:24.772484  294742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:44:24.772534  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:24.799721  294742 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:44:24.799747  294742 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:44:24.799831  294742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-312375
	I1020 12:44:24.802375  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:24.823968  294742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/calico-312375/id_rsa Username:docker}
	I1020 12:44:24.832124  294742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 12:44:24.886570  294742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:44:24.918328  294742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:44:24.937681  294742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:44:25.007568  294742 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1020 12:44:25.009316  294742 node_ready.go:35] waiting up to 15m0s for node "calico-312375" to be "Ready" ...
	I1020 12:44:25.248074  294742 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1020 12:44:25.249082  294742 addons.go:514] duration metric: took 502.213908ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1020 12:44:25.512711  294742 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-312375" context rescaled to 1 replicas
	W1020 12:44:27.013066  294742 node_ready.go:57] node "calico-312375" has "Ready":"False" status (will retry)
	W1020 12:44:24.191048  290109 node_ready.go:57] node "kindnet-312375" has "Ready":"False" status (will retry)
	W1020 12:44:26.689842  290109 node_ready.go:57] node "kindnet-312375" has "Ready":"False" status (will retry)
	I1020 12:44:28.190163  290109 node_ready.go:49] node "kindnet-312375" is "Ready"
	I1020 12:44:28.190189  290109 node_ready.go:38] duration metric: took 10.503537725s for node "kindnet-312375" to be "Ready" ...
	I1020 12:44:28.190203  290109 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:44:28.190239  290109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:44:28.202372  290109 api_server.go:72] duration metric: took 10.793620901s to wait for apiserver process to appear ...
	I1020 12:44:28.202402  290109 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:44:28.202423  290109 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1020 12:44:28.206823  290109 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1020 12:44:28.208035  290109 api_server.go:141] control plane version: v1.34.1
	I1020 12:44:28.208079  290109 api_server.go:131] duration metric: took 5.671177ms to wait for apiserver health ...
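
The healthz gate above is an HTTPS GET against the apiserver that treats a 200 with body "ok" as healthy. A bare-bones version of that probe — certificate verification is skipped here purely for brevity, which is an assumption of the sketch; the real check trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Sketch only: verify against the cluster CA in real code.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.103.2:8443/healthz") // endpoint from the log
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }
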
	I1020 12:44:28.208088  290109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:44:28.211553  290109 system_pods.go:59] 8 kube-system pods found
	I1020 12:44:28.211590  290109 system_pods.go:61] "coredns-66bc5c9577-c5ncd" [0e83621b-c3b1-45f3-8aad-bd3b70ba9460] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:44:28.211597  290109 system_pods.go:61] "etcd-kindnet-312375" [6aaa5eab-c18c-4a74-8e75-6594f06fa457] Running
	I1020 12:44:28.211604  290109 system_pods.go:61] "kindnet-lgpcv" [6b02e28d-303a-41dd-8e77-99e1e17c7502] Running
	I1020 12:44:28.211609  290109 system_pods.go:61] "kube-apiserver-kindnet-312375" [22e4f40f-34c0-452a-be6b-bd687617ab9a] Running
	I1020 12:44:28.211614  290109 system_pods.go:61] "kube-controller-manager-kindnet-312375" [7ae2196e-4de8-40bd-b621-4a5613a21b65] Running
	I1020 12:44:28.211622  290109 system_pods.go:61] "kube-proxy-jr2z7" [bd314319-690d-4a4a-98fb-81fe6ab699e8] Running
	I1020 12:44:28.211627  290109 system_pods.go:61] "kube-scheduler-kindnet-312375" [2db54588-e0fb-42b3-be59-1609bb77c115] Running
	I1020 12:44:28.211638  290109 system_pods.go:61] "storage-provisioner" [77c8538f-7c23-432c-b954-92e881d9654c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:44:28.211646  290109 system_pods.go:74] duration metric: took 3.5519ms to wait for pod list to return data ...
	I1020 12:44:28.211656  290109 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:44:28.215894  290109 default_sa.go:45] found service account: "default"
	I1020 12:44:28.215917  290109 default_sa.go:55] duration metric: took 4.254021ms for default service account to be created ...
	I1020 12:44:28.215928  290109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:44:28.219739  290109 system_pods.go:86] 8 kube-system pods found
	I1020 12:44:28.219787  290109 system_pods.go:89] "coredns-66bc5c9577-c5ncd" [0e83621b-c3b1-45f3-8aad-bd3b70ba9460] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:44:28.219796  290109 system_pods.go:89] "etcd-kindnet-312375" [6aaa5eab-c18c-4a74-8e75-6594f06fa457] Running
	I1020 12:44:28.219804  290109 system_pods.go:89] "kindnet-lgpcv" [6b02e28d-303a-41dd-8e77-99e1e17c7502] Running
	I1020 12:44:28.219811  290109 system_pods.go:89] "kube-apiserver-kindnet-312375" [22e4f40f-34c0-452a-be6b-bd687617ab9a] Running
	I1020 12:44:28.219818  290109 system_pods.go:89] "kube-controller-manager-kindnet-312375" [7ae2196e-4de8-40bd-b621-4a5613a21b65] Running
	I1020 12:44:28.219823  290109 system_pods.go:89] "kube-proxy-jr2z7" [bd314319-690d-4a4a-98fb-81fe6ab699e8] Running
	I1020 12:44:28.219827  290109 system_pods.go:89] "kube-scheduler-kindnet-312375" [2db54588-e0fb-42b3-be59-1609bb77c115] Running
	I1020 12:44:28.219834  290109 system_pods.go:89] "storage-provisioner" [77c8538f-7c23-432c-b954-92e881d9654c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:44:28.219856  290109 retry.go:31] will retry after 274.030385ms: missing components: kube-dns
	I1020 12:44:28.500008  290109 system_pods.go:86] 8 kube-system pods found
	I1020 12:44:28.500055  290109 system_pods.go:89] "coredns-66bc5c9577-c5ncd" [0e83621b-c3b1-45f3-8aad-bd3b70ba9460] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:44:28.500085  290109 system_pods.go:89] "etcd-kindnet-312375" [6aaa5eab-c18c-4a74-8e75-6594f06fa457] Running
	I1020 12:44:28.500093  290109 system_pods.go:89] "kindnet-lgpcv" [6b02e28d-303a-41dd-8e77-99e1e17c7502] Running
	I1020 12:44:28.500099  290109 system_pods.go:89] "kube-apiserver-kindnet-312375" [22e4f40f-34c0-452a-be6b-bd687617ab9a] Running
	I1020 12:44:28.500105  290109 system_pods.go:89] "kube-controller-manager-kindnet-312375" [7ae2196e-4de8-40bd-b621-4a5613a21b65] Running
	I1020 12:44:28.500115  290109 system_pods.go:89] "kube-proxy-jr2z7" [bd314319-690d-4a4a-98fb-81fe6ab699e8] Running
	I1020 12:44:28.500120  290109 system_pods.go:89] "kube-scheduler-kindnet-312375" [2db54588-e0fb-42b3-be59-1609bb77c115] Running
	I1020 12:44:28.500127  290109 system_pods.go:89] "storage-provisioner" [77c8538f-7c23-432c-b954-92e881d9654c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:44:28.500143  290109 retry.go:31] will retry after 255.098141ms: missing components: kube-dns
	I1020 12:44:28.761712  290109 system_pods.go:86] 8 kube-system pods found
	I1020 12:44:28.761747  290109 system_pods.go:89] "coredns-66bc5c9577-c5ncd" [0e83621b-c3b1-45f3-8aad-bd3b70ba9460] Running
	I1020 12:44:28.761754  290109 system_pods.go:89] "etcd-kindnet-312375" [6aaa5eab-c18c-4a74-8e75-6594f06fa457] Running
	I1020 12:44:28.761759  290109 system_pods.go:89] "kindnet-lgpcv" [6b02e28d-303a-41dd-8e77-99e1e17c7502] Running
	I1020 12:44:28.761764  290109 system_pods.go:89] "kube-apiserver-kindnet-312375" [22e4f40f-34c0-452a-be6b-bd687617ab9a] Running
	I1020 12:44:28.761799  290109 system_pods.go:89] "kube-controller-manager-kindnet-312375" [7ae2196e-4de8-40bd-b621-4a5613a21b65] Running
	I1020 12:44:28.761806  290109 system_pods.go:89] "kube-proxy-jr2z7" [bd314319-690d-4a4a-98fb-81fe6ab699e8] Running
	I1020 12:44:28.761812  290109 system_pods.go:89] "kube-scheduler-kindnet-312375" [2db54588-e0fb-42b3-be59-1609bb77c115] Running
	I1020 12:44:28.761817  290109 system_pods.go:89] "storage-provisioner" [77c8538f-7c23-432c-b954-92e881d9654c] Running
	I1020 12:44:28.761826  290109 system_pods.go:126] duration metric: took 545.891641ms to wait for k8s-apps to be running ...
	I1020 12:44:28.761837  290109 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:44:28.761890  290109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:44:28.776470  290109 system_svc.go:56] duration metric: took 14.623305ms WaitForService to wait for kubelet
	I1020 12:44:28.776504  290109 kubeadm.go:586] duration metric: took 11.367757708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:44:28.776525  290109 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:44:28.780002  290109 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1020 12:44:28.780032  290109 node_conditions.go:123] node cpu capacity is 8
	I1020 12:44:28.780047  290109 node_conditions.go:105] duration metric: took 3.515302ms to run NodePressure ...
	I1020 12:44:28.780059  290109 start.go:241] waiting for startup goroutines ...
	I1020 12:44:28.780066  290109 start.go:246] waiting for cluster config update ...
	I1020 12:44:28.780076  290109 start.go:255] writing updated cluster config ...
	I1020 12:44:28.780335  290109 ssh_runner.go:195] Run: rm -f paused
	I1020 12:44:28.785005  290109 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	
	
	==> CRI-O <==
	Oct 20 12:43:58 embed-certs-907116 crio[570]: time="2025-10-20T12:43:58.197851192Z" level=info msg="Started container" PID=1764 containerID=ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps/dashboard-metrics-scraper id=e31c43bb-063b-4610-9be7-b0bc3fd8f733 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90011cc03ba47a375594dbf61912e2481623b1f883674afc397ae5b761f0bbdd
	Oct 20 12:43:58 embed-certs-907116 crio[570]: time="2025-10-20T12:43:58.266236995Z" level=info msg="Removing container: 075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e" id=58298785-8705-4433-a00a-87fab5bd4053 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:43:58 embed-certs-907116 crio[570]: time="2025-10-20T12:43:58.283248039Z" level=info msg="Removed container 075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps/dashboard-metrics-scraper" id=58298785-8705-4433-a00a-87fab5bd4053 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.285273955Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cca4ef92-a93c-495d-9a12-b9944b59d359 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.286837765Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ab438d4a-0e8a-4f4c-ba5f-95200ac8bd6b name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.288669212Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=851ba572-c974-496c-a99e-0c9172565a3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.288819131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.293851634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.294066413Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/32df45fc59921da7bd95c993752f0ac3cb7a35e101ef6848510b9dd88bbe3e29/merged/etc/passwd: no such file or directory"
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.294105577Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/32df45fc59921da7bd95c993752f0ac3cb7a35e101ef6848510b9dd88bbe3e29/merged/etc/group: no such file or directory"
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.294850571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.324758978Z" level=info msg="Created container 8d38623393b88729921c4e30b52c75f00b003c405f1cfe26c42bfceddabd4e95: kube-system/storage-provisioner/storage-provisioner" id=851ba572-c974-496c-a99e-0c9172565a3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.32556516Z" level=info msg="Starting container: 8d38623393b88729921c4e30b52c75f00b003c405f1cfe26c42bfceddabd4e95" id=ede38d2f-f86e-4bdb-8f53-8bc91bb276f9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:44:04 embed-certs-907116 crio[570]: time="2025-10-20T12:44:04.32792419Z" level=info msg="Started container" PID=1778 containerID=8d38623393b88729921c4e30b52c75f00b003c405f1cfe26c42bfceddabd4e95 description=kube-system/storage-provisioner/storage-provisioner id=ede38d2f-f86e-4bdb-8f53-8bc91bb276f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=83fad4c5d6e9b8987a656f19af4557cc7b5171129bf9b4088834ea96f49e4483
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.146714053Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=01186580-ad71-4914-907d-953a8c756316 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.14770439Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5bbdc968-3c29-49cf-b405-85fd4d684faf name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.148793943Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps/dashboard-metrics-scraper" id=544f2f18-addf-47ea-bb81-a657e998c7e1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.148949924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.154468673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.154989711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.178971346Z" level=info msg="Created container d552900c00f6673b2cbe4e7bd3a92fe95e581c59ad9e049938e311cfdf8dd277: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps/dashboard-metrics-scraper" id=544f2f18-addf-47ea-bb81-a657e998c7e1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.17973574Z" level=info msg="Starting container: d552900c00f6673b2cbe4e7bd3a92fe95e581c59ad9e049938e311cfdf8dd277" id=5c582472-891d-4a7d-b1bb-712240090a34 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.182021807Z" level=info msg="Started container" PID=1814 containerID=d552900c00f6673b2cbe4e7bd3a92fe95e581c59ad9e049938e311cfdf8dd277 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps/dashboard-metrics-scraper id=5c582472-891d-4a7d-b1bb-712240090a34 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90011cc03ba47a375594dbf61912e2481623b1f883674afc397ae5b761f0bbdd
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.345210969Z" level=info msg="Removing container: ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608" id=3eac7815-a8ee-4947-93fb-9fe7bdc4bde0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:44:23 embed-certs-907116 crio[570]: time="2025-10-20T12:44:23.354872936Z" level=info msg="Removed container ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps/dashboard-metrics-scraper" id=3eac7815-a8ee-4947-93fb-9fe7bdc4bde0 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d552900c00f66       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   90011cc03ba47       dashboard-metrics-scraper-6ffb444bf9-qsxps   kubernetes-dashboard
	8d38623393b88       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago      Running             storage-provisioner         1                   83fad4c5d6e9b       storage-provisioner                          kube-system
	d1e0d8719fc2a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago      Running             kubernetes-dashboard        0                   2c16c2ec4274e       kubernetes-dashboard-855c9754f9-hm4nh        kubernetes-dashboard
	15ef78b953e81       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   69a8e7af48f96       coredns-66bc5c9577-vpzk5                     kube-system
	c783e004bdaed       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   3cdf8b8f17e38       busybox                                      default
	43d66af915825       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   db646b1767e51       kindnet-24g82                                kube-system
	9f6137e79a6af       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   8d8ed7b2b862b       kube-proxy-s2xbv                             kube-system
	e624948cc12c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   83fad4c5d6e9b       storage-provisioner                          kube-system
	71b8d519c8fcf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   47b0b7aeb88d6       etcd-embed-certs-907116                      kube-system
	b16a54394efdc       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   21f51af56dc9e       kube-controller-manager-embed-certs-907116   kube-system
	22cf3642d99bb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   ae7b39f737646       kube-apiserver-embed-certs-907116            kube-system
	c4cc4d9df25ab       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   83effa6393e5a       kube-scheduler-embed-certs-907116            kube-system
	
	
	==> coredns [15ef78b953e819a004b34f819cf429a261c9139ff8a41f2f50eede4db5a65bde] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53685 - 3538 "HINFO IN 7592767640842329698.2329089204509032448. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.41492157s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-907116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-907116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=embed-certs-907116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_42_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:42:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-907116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:44:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:44:03 +0000   Mon, 20 Oct 2025 12:42:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:44:03 +0000   Mon, 20 Oct 2025 12:42:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:44:03 +0000   Mon, 20 Oct 2025 12:42:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:44:03 +0000   Mon, 20 Oct 2025 12:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-907116
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                6a5dfc3b-6ef1-4198-ad94-963e2bd73b87
	  Boot ID:                    344fc459-8017-49c7-b080-e1ea46b92a7d
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-vpzk5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-907116                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-24g82                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-embed-certs-907116             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-907116    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-s2xbv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-embed-certs-907116             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qsxps    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hm4nh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node embed-certs-907116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node embed-certs-907116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node embed-certs-907116 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node embed-certs-907116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node embed-certs-907116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     115s               kubelet          Node embed-certs-907116 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node embed-certs-907116 event: Registered Node embed-certs-907116 in Controller
	  Normal  NodeReady                98s                kubelet          Node embed-certs-907116 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node embed-certs-907116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node embed-certs-907116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node embed-certs-907116 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node embed-certs-907116 event: Registered Node embed-certs-907116 in Controller
	
	
	==> dmesg <==
	[  +0.097138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028199] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.576885] kauditd_printk_skb: 47 callbacks suppressed
	[Oct20 11:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.034556] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023926] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.024873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +1.023006] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +2.047802] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +4.031698] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[  +8.447395] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[ +16.382889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	[Oct20 12:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 8b b5 5f 27 8d 7e 11 94 34 e7 62 08 00
	
	
	==> etcd [71b8d519c8fcfa7da65604dd578e2dc4d11fc1ca223185cae0a2ce646c90d777] <==
	{"level":"warn","ts":"2025-10-20T12:43:32.236019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.246675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.259876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.266712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.272917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.280352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.287380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.294593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.301381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.308239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.316004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.323511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.331155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.338501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.345268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.352430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.359481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.366763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.373550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.381539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.394720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.403268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.410673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:43:32.466723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50928","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-20T12:44:03.937327Z","caller":"traceutil/trace.go:172","msg":"trace[169735283] transaction","detail":"{read_only:false; response_revision:652; number_of_response:1; }","duration":"211.310798ms","start":"2025-10-20T12:44:03.725996Z","end":"2025-10-20T12:44:03.937307Z","steps":["trace[169735283] 'process raft request'  (duration: 211.155082ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:44:30 up  1:26,  0 user,  load average: 4.20, 3.63, 2.40
	Linux embed-certs-907116 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [43d66af915825ae45fb963115486c3a36542c4a768ce4d5fed2ff9bc19ed78cc] <==
	I1020 12:43:33.692641       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:43:33.692910       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 12:43:33.693102       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:43:33.693122       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:43:33.693146       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:43:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:43:33.990164       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:43:33.990288       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:43:33.990310       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:43:33.990635       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 12:43:34.290707       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:43:34.290741       1 metrics.go:72] Registering metrics
	I1020 12:43:34.290857       1 controller.go:711] "Syncing nftables rules"
	I1020 12:43:43.896846       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 12:43:43.896911       1 main.go:301] handling current node
	I1020 12:43:53.901074       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 12:43:53.901151       1 main.go:301] handling current node
	I1020 12:44:03.896725       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 12:44:03.896832       1 main.go:301] handling current node
	I1020 12:44:13.899001       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 12:44:13.899042       1 main.go:301] handling current node
	I1020 12:44:23.896908       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 12:44:23.896952       1 main.go:301] handling current node
	
	
	==> kube-apiserver [22cf3642d99bbb980929d5d8e78116ccc79fbe6f90ed96694a1910e81f25dac6] <==
	I1020 12:43:32.945995       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 12:43:32.946218       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1020 12:43:32.949198       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 12:43:32.949240       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 12:43:32.949256       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 12:43:32.949261       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 12:43:32.949205       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 12:43:32.949413       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1020 12:43:32.949593       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1020 12:43:32.954878       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 12:43:32.970398       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:43:32.975240       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1020 12:43:32.979823       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 12:43:33.240288       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 12:43:33.270297       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:43:33.273465       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:43:33.273465       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:43:33.297017       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:43:33.303962       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:43:33.344952       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.63.255"}
	I1020 12:43:33.355395       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.195.75"}
	I1020 12:43:33.849466       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:43:36.511725       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:43:36.711646       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:43:36.763003       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b16a54394efdcb6933a56df962f7f8423ae93b34d8452a6afc5f404b46da576e] <==
	I1020 12:43:36.308713       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 12:43:36.308724       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 12:43:36.308750       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:43:36.308709       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 12:43:36.309075       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 12:43:36.309098       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 12:43:36.309129       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:43:36.309144       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:43:36.309158       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:43:36.309186       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-907116"
	I1020 12:43:36.309240       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1020 12:43:36.310177       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 12:43:36.311862       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 12:43:36.313719       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:43:36.313719       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 12:43:36.314858       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:43:36.316686       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 12:43:36.318885       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 12:43:36.318907       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 12:43:36.319003       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 12:43:36.320073       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 12:43:36.322291       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 12:43:36.324559       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 12:43:36.326814       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 12:43:36.334237       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9f6137e79a6af824320fb4d2c61c014d11280ad3d72aaf8477198b8a808bfe57] <==
	I1020 12:43:33.542103       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:43:33.608747       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:43:33.709323       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:43:33.709373       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 12:43:33.709493       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:43:33.728366       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:43:33.728423       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:43:33.733993       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:43:33.734439       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:43:33.734455       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:43:33.735643       1 config.go:309] "Starting node config controller"
	I1020 12:43:33.735646       1 config.go:200] "Starting service config controller"
	I1020 12:43:33.735663       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:43:33.735666       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:43:33.735672       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:43:33.735763       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:43:33.735808       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:43:33.735844       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:43:33.735850       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:43:33.836631       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 12:43:33.836654       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:43:33.836633       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c4cc4d9df25ab88c844bc98d6506700dac4d75294815034c92cfa41e1ddb2d01] <==
	I1020 12:43:31.875545       1 serving.go:386] Generated self-signed cert in-memory
	I1020 12:43:32.910860       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 12:43:32.910892       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:43:32.917721       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 12:43:32.918259       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 12:43:32.918858       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1020 12:43:32.918886       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1020 12:43:32.918931       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:43:32.918941       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:43:32.918959       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:43:32.918966       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:43:33.019219       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1020 12:43:33.019241       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:43:33.019218       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:43:40 embed-certs-907116 kubelet[731]: I1020 12:43:40.217971     731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm4nh" podStartSLOduration=1.37196685 podStartE2EDuration="4.217944756s" podCreationTimestamp="2025-10-20 12:43:36 +0000 UTC" firstStartedPulling="2025-10-20 12:43:37.265008206 +0000 UTC m=+7.209263289" lastFinishedPulling="2025-10-20 12:43:40.110986123 +0000 UTC m=+10.055241195" observedRunningTime="2025-10-20 12:43:40.217437593 +0000 UTC m=+10.161692682" watchObservedRunningTime="2025-10-20 12:43:40.217944756 +0000 UTC m=+10.162199844"
	Oct 20 12:43:41 embed-certs-907116 kubelet[731]: I1020 12:43:41.155417     731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 20 12:43:43 embed-certs-907116 kubelet[731]: I1020 12:43:43.216062     731 scope.go:117] "RemoveContainer" containerID="5aab2b11fd27e2090824fe95c2d8b6f4cb0e09435aee22f53cb71a38919a7bfe"
	Oct 20 12:43:44 embed-certs-907116 kubelet[731]: I1020 12:43:44.220597     731 scope.go:117] "RemoveContainer" containerID="5aab2b11fd27e2090824fe95c2d8b6f4cb0e09435aee22f53cb71a38919a7bfe"
	Oct 20 12:43:44 embed-certs-907116 kubelet[731]: I1020 12:43:44.220849     731 scope.go:117] "RemoveContainer" containerID="075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e"
	Oct 20 12:43:44 embed-certs-907116 kubelet[731]: E1020 12:43:44.221057     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qsxps_kubernetes-dashboard(2698391e-2efd-4836-bfab-522e6715b48f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps" podUID="2698391e-2efd-4836-bfab-522e6715b48f"
	Oct 20 12:43:45 embed-certs-907116 kubelet[731]: I1020 12:43:45.226458     731 scope.go:117] "RemoveContainer" containerID="075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e"
	Oct 20 12:43:45 embed-certs-907116 kubelet[731]: E1020 12:43:45.226640     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qsxps_kubernetes-dashboard(2698391e-2efd-4836-bfab-522e6715b48f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps" podUID="2698391e-2efd-4836-bfab-522e6715b48f"
	Oct 20 12:43:47 embed-certs-907116 kubelet[731]: I1020 12:43:47.235906     731 scope.go:117] "RemoveContainer" containerID="075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e"
	Oct 20 12:43:47 embed-certs-907116 kubelet[731]: E1020 12:43:47.236106     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qsxps_kubernetes-dashboard(2698391e-2efd-4836-bfab-522e6715b48f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps" podUID="2698391e-2efd-4836-bfab-522e6715b48f"
	Oct 20 12:43:58 embed-certs-907116 kubelet[731]: I1020 12:43:58.146468     731 scope.go:117] "RemoveContainer" containerID="075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e"
	Oct 20 12:43:58 embed-certs-907116 kubelet[731]: I1020 12:43:58.264197     731 scope.go:117] "RemoveContainer" containerID="075f95d3449ac54262a3d59fdcfd45ce130224862575f912bbcf7189eed86d6e"
	Oct 20 12:43:58 embed-certs-907116 kubelet[731]: I1020 12:43:58.264478     731 scope.go:117] "RemoveContainer" containerID="ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608"
	Oct 20 12:43:58 embed-certs-907116 kubelet[731]: E1020 12:43:58.264722     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qsxps_kubernetes-dashboard(2698391e-2efd-4836-bfab-522e6715b48f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps" podUID="2698391e-2efd-4836-bfab-522e6715b48f"
	Oct 20 12:44:04 embed-certs-907116 kubelet[731]: I1020 12:44:04.284823     731 scope.go:117] "RemoveContainer" containerID="e624948cc12c19f3af9a7254915b203473031c57f36bc03588d8688e77b1c89d"
	Oct 20 12:44:07 embed-certs-907116 kubelet[731]: I1020 12:44:07.237040     731 scope.go:117] "RemoveContainer" containerID="ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608"
	Oct 20 12:44:07 embed-certs-907116 kubelet[731]: E1020 12:44:07.237376     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qsxps_kubernetes-dashboard(2698391e-2efd-4836-bfab-522e6715b48f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps" podUID="2698391e-2efd-4836-bfab-522e6715b48f"
	Oct 20 12:44:23 embed-certs-907116 kubelet[731]: I1020 12:44:23.146092     731 scope.go:117] "RemoveContainer" containerID="ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608"
	Oct 20 12:44:23 embed-certs-907116 kubelet[731]: I1020 12:44:23.341352     731 scope.go:117] "RemoveContainer" containerID="ae1697aec9a86e2c5f0bfb71e2d2c65e376e6d6dc43a90f5e8987b0ab761c608"
	Oct 20 12:44:23 embed-certs-907116 kubelet[731]: I1020 12:44:23.341807     731 scope.go:117] "RemoveContainer" containerID="d552900c00f6673b2cbe4e7bd3a92fe95e581c59ad9e049938e311cfdf8dd277"
	Oct 20 12:44:23 embed-certs-907116 kubelet[731]: E1020 12:44:23.342051     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qsxps_kubernetes-dashboard(2698391e-2efd-4836-bfab-522e6715b48f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qsxps" podUID="2698391e-2efd-4836-bfab-522e6715b48f"
	Oct 20 12:44:26 embed-certs-907116 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 12:44:26 embed-certs-907116 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 12:44:26 embed-certs-907116 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 20 12:44:26 embed-certs-907116 systemd[1]: kubelet.service: Consumed 1.826s CPU time.
	
	
	==> kubernetes-dashboard [d1e0d8719fc2a02f1a574fface75a559d0703a7f0c071f3f9e982fe3484fee6e] <==
	2025/10/20 12:43:40 Using namespace: kubernetes-dashboard
	2025/10/20 12:43:40 Using in-cluster config to connect to apiserver
	2025/10/20 12:43:40 Using secret token for csrf signing
	2025/10/20 12:43:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 12:43:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 12:43:40 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 12:43:40 Generating JWE encryption key
	2025/10/20 12:43:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 12:43:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 12:43:40 Initializing JWE encryption key from synchronized object
	2025/10/20 12:43:40 Creating in-cluster Sidecar client
	2025/10/20 12:43:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 12:43:40 Serving insecurely on HTTP port: 9090
	2025/10/20 12:44:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 12:43:40 Starting overwatch
	
	
	==> storage-provisioner [8d38623393b88729921c4e30b52c75f00b003c405f1cfe26c42bfceddabd4e95] <==
	I1020 12:44:04.343286       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 12:44:04.354099       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 12:44:04.354155       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 12:44:04.356927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:07.813230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:12.073122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:15.671916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:18.726446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:21.748898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:21.753631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:44:21.753831       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 12:44:21.753966       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e684f2b7-228c-4e12-97d9-985f6618132e", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-907116_1ab68dc1-6e36-4572-957a-3eff9ba52811 became leader
	I1020 12:44:21.754146       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-907116_1ab68dc1-6e36-4572-957a-3eff9ba52811!
	W1020 12:44:21.755931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:21.759003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 12:44:21.854507       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-907116_1ab68dc1-6e36-4572-957a-3eff9ba52811!
	W1020 12:44:23.761882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:23.765787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:25.769631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:25.774914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:27.778033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:27.782034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:29.785250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:44:29.789468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e624948cc12c19f3af9a7254915b203473031c57f36bc03588d8688e77b1c89d] <==
	I1020 12:43:33.507428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 12:44:03.513225       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-907116 -n embed-certs-907116
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-907116 -n embed-certs-907116: exit status 2 (322.300225ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-907116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.64s)
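
Two details in the post-mortem above are worth isolating. The fatal storage-provisioner line is an in-cluster API reachability failure: 10.96.0.1:443 is the default "kubernetes" Service VIP, so the provisioner could not reach the apiserver within its 32s timeout. The repeated "v1 Endpoints is deprecated" warnings come from the provisioner's Endpoints-based leader election, not from the test itself. A minimal manual spot-check, assuming the cluster is still up; the probe pod name and curl image are illustrative and not part of the suite:

	# Probe the apiserver Service VIP from inside the cluster (hypothetical pod).
	kubectl --context embed-certs-907116 run api-probe --rm -i --restart=Never \
	  --image=curlimages/curl -- curl -sk https://10.96.0.1:443/version
	# Re-run the harness's single-field status query by hand; a non-zero exit
	# here is what helpers_test flags as "may be ok" above.
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-907116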

                                                
                                    

Test pass (264/327)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.44
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 3.68
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.41
21 TestBinaryMirror 0.82
22 TestOffline 60.19
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 154.33
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.43
48 TestAddons/StoppedEnableDisable 17.07
49 TestCertOptions 36.34
50 TestCertExpiration 218.1
52 TestForceSystemdFlag 32.22
53 TestForceSystemdEnv 41.49
55 TestKVMDriverInstallOrUpdate 0.56
59 TestErrorSpam/setup 23.11
60 TestErrorSpam/start 0.64
61 TestErrorSpam/status 0.92
62 TestErrorSpam/pause 5.71
63 TestErrorSpam/unpause 5.54
64 TestErrorSpam/stop 8.08
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 39.39
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.25
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.12
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.63
76 TestFunctional/serial/CacheCmd/cache/add_local 0.77
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 45.43
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.25
87 TestFunctional/serial/LogsFileCmd 1.26
88 TestFunctional/serial/InvalidService 4
90 TestFunctional/parallel/ConfigCmd 0.36
91 TestFunctional/parallel/DashboardCmd 6.16
92 TestFunctional/parallel/DryRun 0.37
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.94
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 34.46
102 TestFunctional/parallel/SSHCmd 0.61
103 TestFunctional/parallel/CpCmd 1.85
104 TestFunctional/parallel/MySQL 14.05
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.62
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
114 TestFunctional/parallel/License 0.31
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.49
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.4
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.98
122 TestFunctional/parallel/ImageCommands/Setup 0.44
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.22
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
143 TestFunctional/parallel/ProfileCmd/profile_list 0.38
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
145 TestFunctional/parallel/MountCmd/any-port 5.75
146 TestFunctional/parallel/MountCmd/specific-port 2.05
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.96
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
151 TestFunctional/parallel/ServiceCmd/List 1.7
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 109.25
164 TestMultiControlPlane/serial/DeployApp 4.48
165 TestMultiControlPlane/serial/PingHostFromPods 0.92
166 TestMultiControlPlane/serial/AddWorkerNode 24.54
167 TestMultiControlPlane/serial/NodeLabels 0.06
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
169 TestMultiControlPlane/serial/CopyFile 17.01
170 TestMultiControlPlane/serial/StopSecondaryNode 13.18
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
172 TestMultiControlPlane/serial/RestartSecondaryNode 14.49
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 107.32
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.17
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
177 TestMultiControlPlane/serial/StopCluster 46.62
178 TestMultiControlPlane/serial/RestartCluster 57.26
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
180 TestMultiControlPlane/serial/AddSecondaryNode 75.6
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
185 TestJSONOutput/start/Command 39.15
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.16
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 27.98
211 TestKicCustomNetwork/use_default_bridge_network 23.99
212 TestKicExistingNetwork 25.5
213 TestKicCustomSubnet 24.53
214 TestKicStaticIP 25.58
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 47.25
219 TestMountStart/serial/StartWithMountFirst 5.64
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 5.51
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.16
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 91.4
231 TestMultiNode/serial/DeployApp2Nodes 3.87
232 TestMultiNode/serial/PingHostFrom2Pods 0.66
233 TestMultiNode/serial/AddNode 24.24
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.66
236 TestMultiNode/serial/CopyFile 9.71
237 TestMultiNode/serial/StopNode 2.25
238 TestMultiNode/serial/StartAfterStop 7.67
239 TestMultiNode/serial/RestartKeepsNodes 78.61
240 TestMultiNode/serial/DeleteNode 5.26
241 TestMultiNode/serial/StopMultiNode 30.34
242 TestMultiNode/serial/RestartMultiNode 51.73
243 TestMultiNode/serial/ValidateNameConflict 27.28
248 TestPreload 147.81
250 TestScheduledStopUnix 96.28
253 TestInsufficientStorage 9.62
254 TestRunningBinaryUpgrade 50.52
256 TestKubernetesUpgrade 396.14
257 TestMissingContainerUpgrade 71.65
259 TestStoppedBinaryUpgrade/Setup 0.43
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 36.76
262 TestStoppedBinaryUpgrade/Upgrade 57.4
263 TestNoKubernetes/serial/StartWithStopK8s 25.57
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
273 TestPause/serial/Start 42.46
274 TestNoKubernetes/serial/Start 10.8
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
276 TestNoKubernetes/serial/ProfileList 17.52
277 TestNoKubernetes/serial/Stop 1.26
281 TestNoKubernetes/serial/StartNoArgs 6.94
286 TestNetworkPlugins/group/false 3.2
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
291 TestPause/serial/SecondStartNoReconfiguration 11.88
294 TestStartStop/group/old-k8s-version/serial/FirstStart 51.34
296 TestStartStop/group/no-preload/serial/FirstStart 49.23
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.28
299 TestStartStop/group/old-k8s-version/serial/Stop 15.97
300 TestStartStop/group/no-preload/serial/DeployApp 8.24
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
302 TestStartStop/group/old-k8s-version/serial/SecondStart 45.14
304 TestStartStop/group/no-preload/serial/Stop 16.25
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
306 TestStartStop/group/no-preload/serial/SecondStart 46.22
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
312 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 38.92
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
318 TestStartStop/group/newest-cni/serial/FirstStart 26.14
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.32
321 TestStartStop/group/embed-certs/serial/FirstStart 40.92
323 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.15
326 TestStartStop/group/newest-cni/serial/Stop 2.58
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
328 TestStartStop/group/newest-cni/serial/SecondStart 10.79
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.06
335 TestNetworkPlugins/group/auto/Start 42.08
336 TestStartStop/group/embed-certs/serial/DeployApp 9.25
338 TestStartStop/group/embed-certs/serial/Stop 16.36
339 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
340 TestStartStop/group/embed-certs/serial/SecondStart 49.62
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
342 TestNetworkPlugins/group/auto/KubeletFlags 0.3
343 TestNetworkPlugins/group/auto/NetCatPod 8.19
344 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
345 TestNetworkPlugins/group/auto/DNS 0.14
346 TestNetworkPlugins/group/auto/Localhost 0.12
347 TestNetworkPlugins/group/auto/HairPin 0.12
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
350 TestNetworkPlugins/group/kindnet/Start 41.69
351 TestNetworkPlugins/group/calico/Start 68.52
352 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
354 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/custom-flannel/Start 99.98
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
359 TestNetworkPlugins/group/kindnet/NetCatPod 11.2
360 TestNetworkPlugins/group/kindnet/DNS 0.15
361 TestNetworkPlugins/group/kindnet/Localhost 0.11
362 TestNetworkPlugins/group/kindnet/HairPin 0.12
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/bridge/Start 61.44
365 TestNetworkPlugins/group/calico/KubeletFlags 0.44
366 TestNetworkPlugins/group/calico/NetCatPod 9.72
367 TestNetworkPlugins/group/calico/DNS 0.12
368 TestNetworkPlugins/group/calico/Localhost 0.1
369 TestNetworkPlugins/group/calico/HairPin 0.1
370 TestNetworkPlugins/group/flannel/Start 50.88
371 TestNetworkPlugins/group/enable-default-cni/Start 71.58
372 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
373 TestNetworkPlugins/group/bridge/NetCatPod 9.22
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.21
376 TestNetworkPlugins/group/bridge/DNS 0.11
377 TestNetworkPlugins/group/bridge/Localhost 0.09
378 TestNetworkPlugins/group/bridge/HairPin 0.1
379 TestNetworkPlugins/group/custom-flannel/DNS 0.11
380 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
381 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
384 TestNetworkPlugins/group/flannel/NetCatPod 9.19
385 TestNetworkPlugins/group/flannel/DNS 0.11
386 TestNetworkPlugins/group/flannel/Localhost 0.09
387 TestNetworkPlugins/group/flannel/HairPin 0.09
388 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
389 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.27
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
TestDownloadOnly/v1.28.0/json-events (5.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-611429 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-611429 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.434918219s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.44s)
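
The json-events checks assert that start -o=json replaces the human-oriented output with a machine-readable JSON event stream, one object per line. A quick way to inspect that stream by hand, assuming jq is installed (the test itself does not use it):

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-611429 \
	  --force --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker | jq .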

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1020 11:56:02.509926   14592 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1020 11:56:02.510018   14592 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-611429
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-611429: exit status 85 (59.804069ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-611429 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-611429 │ jenkins │ v1.37.0 │ 20 Oct 25 11:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 11:55:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 11:55:57.114079   14604 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:55:57.114166   14604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:55:57.114173   14604 out.go:374] Setting ErrFile to fd 2...
	I1020 11:55:57.114178   14604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:55:57.114391   14604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	W1020 11:55:57.114506   14604 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21773-11075/.minikube/config/config.json: open /home/jenkins/minikube-integration/21773-11075/.minikube/config/config.json: no such file or directory
	I1020 11:55:57.114978   14604 out.go:368] Setting JSON to true
	I1020 11:55:57.115831   14604 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2306,"bootTime":1760959051,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 11:55:57.115915   14604 start.go:141] virtualization: kvm guest
	I1020 11:55:57.119011   14604 out.go:99] [download-only-611429] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 11:55:57.119139   14604 notify.go:220] Checking for updates...
	W1020 11:55:57.119144   14604 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball: no such file or directory
	I1020 11:55:57.120812   14604 out.go:171] MINIKUBE_LOCATION=21773
	I1020 11:55:57.122301   14604 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 11:55:57.123619   14604 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 11:55:57.124989   14604 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 11:55:57.126357   14604 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1020 11:55:57.128636   14604 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1020 11:55:57.128915   14604 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 11:55:57.153628   14604 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 11:55:57.153688   14604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 11:55:57.563642   14604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-20 11:55:57.551143483 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 11:55:57.563749   14604 docker.go:318] overlay module found
	I1020 11:55:57.565567   14604 out.go:99] Using the docker driver based on user configuration
	I1020 11:55:57.565604   14604 start.go:305] selected driver: docker
	I1020 11:55:57.565613   14604 start.go:925] validating driver "docker" against <nil>
	I1020 11:55:57.565698   14604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 11:55:57.623262   14604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-20 11:55:57.61410024 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 11:55:57.623412   14604 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 11:55:57.623888   14604 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1020 11:55:57.624049   14604 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1020 11:55:57.625797   14604 out.go:171] Using Docker driver with root privileges
	I1020 11:55:57.627014   14604 cni.go:84] Creating CNI manager for ""
	I1020 11:55:57.627077   14604 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 11:55:57.627089   14604 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 11:55:57.627169   14604 start.go:349] cluster config:
	{Name:download-only-611429 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-611429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 11:55:57.628540   14604 out.go:99] Starting "download-only-611429" primary control-plane node in "download-only-611429" cluster
	I1020 11:55:57.628560   14604 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 11:55:57.629823   14604 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1020 11:55:57.629845   14604 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 11:55:57.629955   14604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 11:55:57.646246   14604 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1020 11:55:57.646429   14604 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1020 11:55:57.646539   14604 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1020 11:55:57.650474   14604 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1020 11:55:57.650500   14604 cache.go:58] Caching tarball of preloaded images
	I1020 11:55:57.650626   14604 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 11:55:57.652520   14604 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1020 11:55:57.652541   14604 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1020 11:55:57.674908   14604 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1020 11:55:57.675031   14604 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1020 11:56:01.055554   14604 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1020 11:56:01.055898   14604 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/download-only-611429/config.json ...
	I1020 11:56:01.055926   14604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/download-only-611429/config.json: {Name:mk30862e581fb952cf3bb5f0b9f93fd62c827309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:56:01.056104   14604 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 11:56:01.056303   14604 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21773-11075/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-611429 host does not exist
	  To start a cluster, run: "minikube start -p download-only-611429"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
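
The Last Start log above also documents how the preload download is integrity-checked: the expected md5 is fetched from the GCS API and appended to the download URL as ?checksum=md5:..., then verified after download. A manual equivalent against the cached file, using the path and hash reported in the log:

	md5sum /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	# expected: 72bc7f8573f574c02d8c9a9b3496176b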

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-611429
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-877202 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-877202 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.675842884s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.68s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1020 11:56:06.599154   14592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1020 11:56:06.599218   14592 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-11075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-877202
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-877202: exit status 85 (60.301403ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-611429 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-611429 │ jenkins │ v1.37.0 │ 20 Oct 25 11:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:56 UTC │
	│ delete  │ -p download-only-611429                                                                                                                                                   │ download-only-611429 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │ 20 Oct 25 11:56 UTC │
	│ start   │ -o=json --download-only -p download-only-877202 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-877202 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 11:56:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 11:56:02.962806   14958 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:56:02.963069   14958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:56:02.963079   14958 out.go:374] Setting ErrFile to fd 2...
	I1020 11:56:02.963084   14958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:56:02.963262   14958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 11:56:02.963789   14958 out.go:368] Setting JSON to true
	I1020 11:56:02.964629   14958 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2312,"bootTime":1760959051,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 11:56:02.964717   14958 start.go:141] virtualization: kvm guest
	I1020 11:56:02.966631   14958 out.go:99] [download-only-877202] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 11:56:02.966786   14958 notify.go:220] Checking for updates...
	I1020 11:56:02.968266   14958 out.go:171] MINIKUBE_LOCATION=21773
	I1020 11:56:02.969730   14958 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 11:56:02.971006   14958 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 11:56:02.972049   14958 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 11:56:02.973407   14958 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1020 11:56:02.976063   14958 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1020 11:56:02.976347   14958 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 11:56:02.998547   14958 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 11:56:02.998620   14958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 11:56:03.053549   14958 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-20 11:56:03.044419519 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 11:56:03.053647   14958 docker.go:318] overlay module found
	I1020 11:56:03.055305   14958 out.go:99] Using the docker driver based on user configuration
	I1020 11:56:03.055326   14958 start.go:305] selected driver: docker
	I1020 11:56:03.055331   14958 start.go:925] validating driver "docker" against <nil>
	I1020 11:56:03.055410   14958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 11:56:03.112118   14958 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-20 11:56:03.10343583 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 11:56:03.112301   14958 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 11:56:03.112781   14958 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1020 11:56:03.112940   14958 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1020 11:56:03.114808   14958 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-877202 host does not exist
	  To start a cluster, run: "minikube start -p download-only-877202"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-877202
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (0.41s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-175079 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-175079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-175079
--- PASS: TestDownloadOnlyKic (0.41s)

                                                
                                    
TestBinaryMirror (0.82s)

                                                
                                                
=== RUN   TestBinaryMirror
I1020 11:56:07.699286   14592 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-253279 --alsologtostderr --binary-mirror http://127.0.0.1:42287 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-253279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-253279
--- PASS: TestBinaryMirror (0.82s)
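
A --binary-mirror only needs to reproduce the dl.k8s.io path layout, i.e. serve release/v1.34.1/bin/linux/amd64/kubectl with its .sha256 file alongside it, matching the URL logged above. A throwaway sketch of such a mirror, assuming Python 3 is available; the ./k8s-mirror directory is hypothetical and must already contain the mirrored files:

	python3 -m http.server 42287 --directory ./k8s-mirror &
	out/minikube-linux-amd64 start --download-only -p binary-mirror-253279 \
	  --binary-mirror http://127.0.0.1:42287 --driver=docker --container-runtime=crio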

                                                
                                    
TestOffline (60.19s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-014915 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-014915 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (57.630301662s)
helpers_test.go:175: Cleaning up "offline-crio-014915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-014915
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-014915: (2.563014873s)
--- PASS: TestOffline (60.19s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-053741
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-053741: exit status 85 (55.443669ms)

                                                
                                                
-- stdout --
	* Profile "addons-053741" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-053741"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-053741
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-053741: exit status 85 (54.50146ms)

                                                
                                                
-- stdout --
	* Profile "addons-053741" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-053741"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (154.33s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-053741 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-053741 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m34.326841972s)
--- PASS: TestAddons/Setup (154.33s)
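
With this many --addons flags in play, it can be worth confirming what actually came up; minikube reports the per-profile addon states directly:

	out/minikube-linux-amd64 -p addons-053741 addons list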

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-053741 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-053741 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.43s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-053741 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-053741 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4d37d1ce-93e9-4ffd-ae7d-4730ac0bf5cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4d37d1ce-93e9-4ffd-ae7d-4730ac0bf5cf] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.00380237s
addons_test.go:694: (dbg) Run:  kubectl --context addons-053741 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-053741 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-053741 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.43s)

                                                
                                    
TestAddons/StoppedEnableDisable (17.07s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-053741
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-053741: (16.820584968s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-053741
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-053741
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-053741
--- PASS: TestAddons/StoppedEnableDisable (17.07s)
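
Note what the sequence above establishes: addon toggles are accepted while the cluster is stopped, with the change persisted in the profile config rather than applied to a live apiserver. A minimal sketch using the commands from this run:
    out/minikube-linux-amd64 stop -p addons-053741
    out/minikube-linux-amd64 addons enable dashboard -p addons-053741    # accepted while stopped
    out/minikube-linux-amd64 addons disable dashboard -p addons-053741   # likewise recorded in the profile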

TestCertOptions (36.34s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-418869 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-418869 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.118516716s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-418869 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-418869 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-418869 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-418869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-418869
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-418869: (2.476938196s)
--- PASS: TestCertOptions (36.34s)
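
To verify the custom SANs and apiserver port by hand, the same openssl call the test makes can be filtered; the grep below is added for illustration, and the expected values come from the start flags above:
    out/minikube-linux-amd64 -p cert-options-418869 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"   # should list 192.168.15.15 and www.google.com
    kubectl --context cert-options-418869 config view   # the server URL should end in :8555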

TestCertExpiration (218.1s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-365628 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-365628 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (29.483126568s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-365628 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-365628 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.966372416s)
helpers_test.go:175: Cleaning up "cert-expiration-365628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-365628
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-365628: (2.647305704s)
--- PASS: TestCertExpiration (218.10s)
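
The first start issues certificates that expire in three minutes; after waiting them out, the second start with --cert-expiration=8760h presumably exercises regeneration of the lapsed certs (hence the 218s wall time for two short starts). A hedged way to inspect the resulting expiry, using the same cert path as TestCertOptions above:
    out/minikube-linux-amd64 -p cert-expiration-365628 ssh "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"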

TestForceSystemdFlag (32.22s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-670413 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-670413 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.457864447s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-670413 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-670413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-670413
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-670413: (2.453447223s)
--- PASS: TestForceSystemdFlag (32.22s)
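
The cat above is how the test inspects which cgroup manager CRI-O was configured with. Assuming the standard CRI-O key (the log does not quote the file's contents), the drop-in should carry cgroup_manager = "systemd" when --force-systemd is set:
    out/minikube-linux-amd64 -p force-systemd-flag-670413 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager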

TestForceSystemdEnv (41.49s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-104936 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-104936 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.02238779s)
helpers_test.go:175: Cleaning up "force-systemd-env-104936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-104936
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-104936: (2.464155358s)
--- PASS: TestForceSystemdEnv (41.49s)

TestKVMDriverInstallOrUpdate (0.56s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1020 12:38:35.869660   14592 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1020 12:38:35.869884   14592 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3156273520/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1020 12:38:35.899294   14592 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3156273520/001/docker-machine-driver-kvm2 version is 1.1.1
W1020 12:38:35.899339   14592 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1020 12:38:35.899463   14592 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1020 12:38:35.899511   14592 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3156273520/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (0.56s)
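
Per the log, the updater validates the binary already on PATH, finds 1.1.1 against the wanted 1.37.0, and downloads the release with a sha256 checksum. The validation step appears to amount to running the driver's own version command; a sketch against the temp path from this run:
    /tmp/TestKVMDriverInstallOrUpdate3156273520/001/docker-machine-driver-kvm2 version   # reports 1.1.1 here, triggering the download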

TestErrorSpam/setup (23.11s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-680679 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-680679 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-680679 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-680679 --driver=docker  --container-runtime=crio: (23.109789842s)
--- PASS: TestErrorSpam/setup (23.11s)

TestErrorSpam/start (0.64s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

TestErrorSpam/status (0.92s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (5.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 pause: exit status 80 (2.082818913s)
-- stdout --
	* Pausing node nospam-680679 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:02:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 pause: exit status 80 (1.657058307s)
-- stdout --
	* Pausing node nospam-680679 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:02:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 pause: exit status 80 (1.968142829s)
-- stdout --
	* Pausing node nospam-680679 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:02:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.71s)
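
All three pause attempts fail identically: minikube shells into the node and runs sudo runc list -f json, and runc aborts because its state directory /run/runc does not exist, even though the cluster itself is healthy. Some hedged commands to narrow this down on the node (CRI-O may simply be keeping container state under a different runc root than the one minikube queries):
    out/minikube-linux-amd64 -p nospam-680679 ssh "sudo ls /run/runc"         # the directory the failing call expects
    out/minikube-linux-amd64 -p nospam-680679 ssh "sudo crictl ps"            # containers as CRI-O itself sees them
    out/minikube-linux-amd64 -p nospam-680679 ssh "sudo runc list -f json"    # reproduce the exact failing call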

TestErrorSpam/unpause (5.54s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 unpause: exit status 80 (2.243905498s)
-- stdout --
	* Unpausing node nospam-680679 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:02:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 unpause: exit status 80 (1.922965424s)
-- stdout --
	* Unpausing node nospam-680679 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:02:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 unpause: exit status 80 (1.376708116s)
-- stdout --
	* Unpausing node nospam-680679 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:02:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.54s)

TestErrorSpam/stop (8.08s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 stop: (7.89137946s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-680679 --log_dir /tmp/nospam-680679 stop
--- PASS: TestErrorSpam/stop (8.08s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21773-11075/.minikube/files/etc/test/nested/copy/14592/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (39.39s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012564 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-012564 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.384758909s)
--- PASS: TestFunctional/serial/StartWithProxy (39.39s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.25s)

=== RUN   TestFunctional/serial/SoftStart
I1020 12:03:20.711578   14592 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012564 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-012564 --alsologtostderr -v=8: (6.245998729s)
functional_test.go:678: soft start took 6.246786801s for "functional-012564" cluster.
I1020 12:03:26.958037   14592 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.25s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-012564 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.63s)

TestFunctional/serial/CacheCmd/cache/add_local (0.77s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-012564 /tmp/TestFunctionalserialCacheCmdcacheadd_local1925742035/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 cache add minikube-local-cache-test:functional-012564
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 cache delete minikube-local-cache-test:functional-012564
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-012564
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.77s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (278.664404ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)
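
The cycle above is the whole point of this subtest: remove a cached image from the node, confirm crictl no longer finds it, then let cache reload push everything in the local cache back in. The same sequence, runnable by hand:
    out/minikube-linux-amd64 -p functional-012564 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-012564 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
    out/minikube-linux-amd64 -p functional-012564 cache reload
    out/minikube-linux-amd64 -p functional-012564 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again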

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 kubectl -- --context functional-012564 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-012564 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (45.43s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012564 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1020 12:03:43.475162   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:03:43.481585   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:03:43.492987   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:03:43.514444   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:03:43.555992   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:03:43.637427   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:03:43.798982   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:03:44.120657   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:03:44.762707   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:03:46.044241   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:03:48.607177   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:03:53.729353   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:04:03.971573   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-012564 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.432596006s)
functional_test.go:776: restart took 45.432709519s for "functional-012564" cluster.
I1020 12:04:18.183894   14592 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (45.43s)
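
--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision is passed through to the kube-apiserver static pod on restart. One hedged way to confirm the flag landed (the label selector and jsonpath follow the usual conventions for static control-plane pods; they are not taken from this log):
    kubectl --context functional-012564 -n kube-system get pod -l component=kube-apiserver -o jsonpath="{.items[0].spec.containers[0].command}" | tr "," "\n" | grep enable-admission-plugins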

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-012564 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.25s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-012564 logs: (1.249109973s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

TestFunctional/serial/LogsFileCmd (1.26s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 logs --file /tmp/TestFunctionalserialLogsFileCmd4064875709/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-012564 logs --file /tmp/TestFunctionalserialLogsFileCmd4064875709/001/logs.txt: (1.262394605s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)

TestFunctional/serial/InvalidService (4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-012564 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-012564
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-012564: exit status 115 (336.639075ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31996 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-012564 delete -f testdata/invalidsvc.yaml
E1020 12:04:24.453377   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/InvalidService (4.00s)
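
The SVC_UNREACHABLE exit is the expected behavior here: the Service object exists (hence the NodePort URL in the table) but no pod ever backs it, so minikube refuses to hand out the URL. A hedged two-line check of that state while the manifest is applied:
    kubectl --context functional-012564 get svc invalid-svc         # the NodePort is allocated...
    kubectl --context functional-012564 get endpoints invalid-svc   # ...but there are no ready endpoints behind it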

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 config get cpus: exit status 14 (64.369946ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 config get cpus: exit status 14 (57.780098ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

TestFunctional/parallel/DashboardCmd (6.16s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-012564 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-012564 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 53003: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.16s)

TestFunctional/parallel/DryRun (0.37s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012564 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-012564 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (159.329994ms)
-- stdout --
	* [functional-012564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1020 12:04:46.756163   52497 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:04:46.756441   52497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:04:46.756452   52497 out.go:374] Setting ErrFile to fd 2...
	I1020 12:04:46.756456   52497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:04:46.756677   52497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:04:46.757156   52497 out.go:368] Setting JSON to false
	I1020 12:04:46.758256   52497 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2836,"bootTime":1760959051,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:04:46.758351   52497 start.go:141] virtualization: kvm guest
	I1020 12:04:46.762160   52497 out.go:179] * [functional-012564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:04:46.763711   52497 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:04:46.763716   52497 notify.go:220] Checking for updates...
	I1020 12:04:46.766437   52497 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:04:46.767857   52497 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:04:46.769557   52497 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:04:46.770940   52497 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:04:46.772332   52497 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:04:46.774101   52497 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:04:46.774619   52497 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:04:46.798900   52497 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:04:46.799001   52497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:04:46.856996   52497 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-20 12:04:46.846022195 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:04:46.857111   52497 docker.go:318] overlay module found
	I1020 12:04:46.858937   52497 out.go:179] * Using the docker driver based on existing profile
	I1020 12:04:46.860248   52497 start.go:305] selected driver: docker
	I1020 12:04:46.860263   52497 start.go:925] validating driver "docker" against &{Name:functional-012564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:04:46.860369   52497 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:04:46.862168   52497 out.go:203] 
	W1020 12:04:46.863608   52497 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1020 12:04:46.865296   52497 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012564 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.37s)
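
Both dry runs are validated before any node work happens: the 250MB request trips the 1800MB floor named in the error, while the second, size-less invocation passes. Per that message, the smallest explicit allocation that should clear the check is:
    out/minikube-linux-amd64 start -p functional-012564 --dry-run --memory 1800MB --alsologtostderr --driver=docker --container-runtime=crio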

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012564 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-012564 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (155.539556ms)
-- stdout --
	* [functional-012564] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1020 12:04:47.128631   52737 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:04:47.128886   52737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:04:47.128893   52737 out.go:374] Setting ErrFile to fd 2...
	I1020 12:04:47.128898   52737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:04:47.129221   52737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:04:47.129674   52737 out.go:368] Setting JSON to false
	I1020 12:04:47.130617   52737 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2836,"bootTime":1760959051,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:04:47.130710   52737 start.go:141] virtualization: kvm guest
	I1020 12:04:47.132579   52737 out.go:179] * [functional-012564] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1020 12:04:47.134144   52737 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:04:47.134129   52737 notify.go:220] Checking for updates...
	I1020 12:04:47.136817   52737 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:04:47.138149   52737 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:04:47.139582   52737 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:04:47.140963   52737 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:04:47.142338   52737 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:04:47.143925   52737 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:04:47.144431   52737 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:04:47.170585   52737 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:04:47.170731   52737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:04:47.227964   52737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-20 12:04:47.218095466 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:04:47.228077   52737 docker.go:318] overlay module found
	I1020 12:04:47.229949   52737 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1020 12:04:47.231160   52737 start.go:305] selected driver: docker
	I1020 12:04:47.231174   52737 start.go:925] validating driver "docker" against &{Name:functional-012564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:04:47.231256   52737 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:04:47.233333   52737 out.go:203] 
	W1020 12:04:47.234835   52737 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	[English: X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB. This is the localized form of the DryRun failure above; the test verifies that minikube emits French output.]
	I1020 12:04:47.236175   52737 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (34.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4da6fec9-1787-4d35-a8e4-6453476bfe62] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004353504s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-012564 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-012564 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-012564 get pvc myclaim -o=json
I1020 12:04:31.864392   14592 retry.go:31] will retry after 2.775833193s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:3c555765-df16-4249-a2c4-b031be9be888 ResourceVersion:674 Generation:0 CreationTimestamp:2025-10-20 12:04:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001d6aa60 VolumeMode:0xc001d6aa70 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-012564 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-012564 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-012564 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-012564 apply -f testdata/storage-provisioner/pod.yaml
I1020 12:04:43.153405   14592 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [58f36a32-2b12-41b7-a395-314b15b15b52] Pending
helpers_test.go:352: "sp-pod" [58f36a32-2b12-41b7-a395-314b15b15b52] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [58f36a32-2b12-41b7-a395-314b15b15b52] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004202202s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-012564 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-012564 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-012564 apply -f testdata/storage-provisioner/pod.yaml
I1020 12:04:52.983321   14592 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [bce5835c-ec28-4059-acab-5300d12b4d40] Pending
2025/10/20 12:04:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "sp-pod" [bce5835c-ec28-4059-acab-5300d12b4d40] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [bce5835c-ec28-4059-acab-5300d12b4d40] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003328344s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-012564 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.46s)
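For reference, the claim that testdata/storage-provisioner/pvc.yaml applies can be read back from the kubectl.kubernetes.io/last-applied-configuration annotation captured in the retry message above; rendered as YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi        # matches the 524288000-byte request in the retry dump
  volumeMode: Filesystem

The pod manifest is not echoed in the log. A minimal sketch consistent with what the test exercises (the test=storage-provisioner label it waits on, a container named myfrontend, and the /tmp/mount path used by the exec steps) would look like the following; the image and volume name are assumptions, not taken from the log:

apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner               # label the test waits on
spec:
  containers:
  - name: myfrontend                        # container named in the ContainersNotReady status above
    image: docker.io/library/nginx:latest   # assumption; the log does not show the image
    volumeMounts:
    - name: mypd                            # assumed volume name
      mountPath: /tmp/mount                 # path exercised by `exec sp-pod -- touch /tmp/mount/foo`
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim                    # binds the pod to the claim above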

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh -n functional-012564 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 cp functional-012564:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1639164455/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh -n functional-012564 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh -n functional-012564 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (14.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-012564 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-7kthf" [02f9a2da-2aca-43e0-b7cf-2ea94b724890] Pending
helpers_test.go:352: "mysql-5bb876957f-7kthf" [02f9a2da-2aca-43e0-b7cf-2ea94b724890] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-7kthf" [02f9a2da-2aca-43e0-b7cf-2ea94b724890] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.003065792s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-012564 exec mysql-5bb876957f-7kthf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-012564 exec mysql-5bb876957f-7kthf -- mysql -ppassword -e "show databases;": exit status 1 (87.797816ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1020 12:05:08.541293   14592 retry.go:31] will retry after 706.600426ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-012564 exec mysql-5bb876957f-7kthf -- mysql -ppassword -e "show databases;"
E1020 12:06:27.336471   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:08:43.468439   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:09:11.177928   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:13:43.468403   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (14.05s)
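For reference, a minimal sketch of the kind of manifest testdata/mysql.yaml replaces, inferred from this log (the app=mysql label the test waits on, a container named mysql, the docker.io/library/mysql:5.7 image from the image listings below, and the root password implied by the -ppassword flag); the repository's actual file may set further fields such as resource limits:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql                             # label the test waits on
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql                          # container named in the ContainersNotReady status above
        image: docker.io/library/mysql:5.7   # tag matches the image tables in this report
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password                    # matches `mysql -ppassword` used by the test

The first `show databases;` attempt failed with ERROR 2002, likely because mysqld had not yet opened its socket; the harness retried after ~0.7s and the second attempt succeeded, so the test still passes.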

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14592/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "sudo cat /etc/test/nested/copy/14592/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14592.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "sudo cat /etc/ssl/certs/14592.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14592.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "sudo cat /usr/share/ca-certificates/14592.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/145922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "sudo cat /etc/ssl/certs/145922.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/145922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "sudo cat /usr/share/ca-certificates/145922.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-012564 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 ssh "sudo systemctl is-active docker": exit status 1 (319.986698ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 ssh "sudo systemctl is-active containerd": exit status 1 (324.487556ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012564 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012564 image ls --format short --alsologtostderr:
I1020 12:05:01.669204   54626 out.go:360] Setting OutFile to fd 1 ...
I1020 12:05:01.669460   54626 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:05:01.669469   54626 out.go:374] Setting ErrFile to fd 2...
I1020 12:05:01.669473   54626 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:05:01.669693   54626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
I1020 12:05:01.670297   54626 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:05:01.670390   54626 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:05:01.670766   54626 cli_runner.go:164] Run: docker container inspect functional-012564 --format={{.State.Status}}
I1020 12:05:01.689947   54626 ssh_runner.go:195] Run: systemctl --version
I1020 12:05:01.690012   54626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012564
I1020 12:05:01.708660   54626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/functional-012564/id_rsa Username:docker}
I1020 12:05:01.808660   54626 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012564 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ localhost/my-image                      │ functional-012564  │ 549644bebfe90 │ 1.47MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012564 image ls --format table --alsologtostderr:
I1020 12:05:05.522720   55405 out.go:360] Setting OutFile to fd 1 ...
I1020 12:05:05.523026   55405 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:05:05.523036   55405 out.go:374] Setting ErrFile to fd 2...
I1020 12:05:05.523040   55405 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:05:05.523250   55405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
I1020 12:05:05.523854   55405 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:05:05.523943   55405 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:05:05.524304   55405 cli_runner.go:164] Run: docker container inspect functional-012564 --format={{.State.Status}}
I1020 12:05:05.543575   55405 ssh_runner.go:195] Run: systemctl --version
I1020 12:05:05.543626   55405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012564
I1020 12:05:05.564162   55405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/functional-012564/id_rsa Username:docker}
I1020 12:05:05.664286   55405 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image ls --format json --alsologtostderr
E1020 12:05:05.414694   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012564 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"61b9d2ff5d2511b12f60b999487ad0c9c5f9809cb3019adb50a005b141c7abea","repoDigests":["docker.io/library/8024efc5ce58dfdfde1c063d792330977a24e34e5c428f9f603f50754135ab7d-tmp@sha256:6673d19fa722e89807c43e9972d32bcc5ecb0a461edda208f284de629fc60a41"],"repoTags":[],"size":"1466132"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"07ccdb7838758e758a4d52a9761636c385125a3
27355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-
provisioner:v5"],"size":"31470524"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e513925
24dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"549644bebfe9051bae77740c9ffddd532544f48a720fe269f708f1da0fc0c9af
","repoDigests":["localhost/my-image@sha256:5c75e763bb619e6fc7a142afd21398941c9997813f042e7e760e648cf120d2d4"],"repoTags":["localhost/my-image:functional-012564"],"size":"1468744"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f0
7a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry
.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012564 image ls --format json --alsologtostderr:
I1020 12:05:05.270305   55353 out.go:360] Setting OutFile to fd 1 ...
I1020 12:05:05.270539   55353 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:05:05.270546   55353 out.go:374] Setting ErrFile to fd 2...
I1020 12:05:05.270550   55353 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:05:05.270789   55353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
I1020 12:05:05.271402   55353 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:05:05.271501   55353 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:05:05.271898   55353 cli_runner.go:164] Run: docker container inspect functional-012564 --format={{.State.Status}}
I1020 12:05:05.292093   55353 ssh_runner.go:195] Run: systemctl --version
I1020 12:05:05.292137   55353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012564
I1020 12:05:05.310140   55353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/functional-012564/id_rsa Username:docker}
I1020 12:05:05.410415   55353 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012564 image ls --format yaml --alsologtostderr:
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012564 image ls --format yaml --alsologtostderr:
I1020 12:05:02.065511   54733 out.go:360] Setting OutFile to fd 1 ...
I1020 12:05:02.065756   54733 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:05:02.065764   54733 out.go:374] Setting ErrFile to fd 2...
I1020 12:05:02.065768   54733 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:05:02.065968   54733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
I1020 12:05:02.066532   54733 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:05:02.066624   54733 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:05:02.067018   54733 cli_runner.go:164] Run: docker container inspect functional-012564 --format={{.State.Status}}
I1020 12:05:02.087219   54733 ssh_runner.go:195] Run: systemctl --version
I1020 12:05:02.087264   54733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012564
I1020 12:05:02.105581   54733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/functional-012564/id_rsa Username:docker}
I1020 12:05:02.205928   54733 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 ssh pgrep buildkitd: exit status 1 (281.759225ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image build -t localhost/my-image:functional-012564 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-012564 image build -t localhost/my-image:functional-012564 testdata/build --alsologtostderr: (2.436576916s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012564 image build -t localhost/my-image:functional-012564 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 61b9d2ff5d2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-012564
--> 549644bebfe
Successfully tagged localhost/my-image:functional-012564
549644bebfe9051bae77740c9ffddd532544f48a720fe269f708f1da0fc0c9af
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012564 image build -t localhost/my-image:functional-012564 testdata/build --alsologtostderr:
I1020 12:05:02.568025   54894 out.go:360] Setting OutFile to fd 1 ...
I1020 12:05:02.568330   54894 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:05:02.568341   54894 out.go:374] Setting ErrFile to fd 2...
I1020 12:05:02.568346   54894 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:05:02.568520   54894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
I1020 12:05:02.569123   54894 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:05:02.569709   54894 config.go:182] Loaded profile config "functional-012564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:05:02.570095   54894 cli_runner.go:164] Run: docker container inspect functional-012564 --format={{.State.Status}}
I1020 12:05:02.589636   54894 ssh_runner.go:195] Run: systemctl --version
I1020 12:05:02.589711   54894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012564
I1020 12:05:02.609954   54894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/functional-012564/id_rsa Username:docker}
I1020 12:05:02.709620   54894 build_images.go:161] Building image from path: /tmp/build.529811860.tar
I1020 12:05:02.709695   54894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1020 12:05:02.718163   54894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.529811860.tar
I1020 12:05:02.721889   54894 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.529811860.tar: stat -c "%s %y" /var/lib/minikube/build/build.529811860.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.529811860.tar': No such file or directory
I1020 12:05:02.721920   54894 ssh_runner.go:362] scp /tmp/build.529811860.tar --> /var/lib/minikube/build/build.529811860.tar (3072 bytes)
I1020 12:05:02.740187   54894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.529811860
I1020 12:05:02.748283   54894 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.529811860 -xf /var/lib/minikube/build/build.529811860.tar
I1020 12:05:02.757185   54894 crio.go:315] Building image: /var/lib/minikube/build/build.529811860
I1020 12:05:02.757280   54894 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-012564 /var/lib/minikube/build/build.529811860 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1020 12:05:04.934429   54894 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-012564 /var/lib/minikube/build/build.529811860 --cgroup-manager=cgroupfs: (2.177103504s)
I1020 12:05:04.934492   54894 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.529811860
I1020 12:05:04.943458   54894 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.529811860.tar
I1020 12:05:04.951711   54894 build_images.go:217] Built localhost/my-image:functional-012564 from /tmp/build.529811860.tar
I1020 12:05:04.951745   54894 build_images.go:133] succeeded building to: functional-012564
I1020 12:05:04.951750   54894 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.98s)
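
The build log above shows the whole flow minikube uses on a crio node: ship the build context as a tarball, unpack it under /var/lib/minikube/build, and run podman build with the cgroupfs manager. A rough sketch of the same steps driven over `minikube ssh` (paths and profile name are taken from the log; the helper itself is hypothetical, and it assumes the tarball was already copied over, as the scp step above does):

package main

import (
	"fmt"
	"os/exec"
)

// nodeSSH runs a command inside the minikube node via `minikube ssh --`.
func nodeSSH(profile string, args ...string) error {
	full := append([]string{"-p", profile, "ssh", "--"}, args...)
	return exec.Command("out/minikube-linux-amd64", full...).Run()
}

func main() {
	profile := "functional-012564"
	dir := "/var/lib/minikube/build/build.529811860"
	steps := [][]string{
		{"sudo", "mkdir", "-p", dir},
		{"sudo", "tar", "-C", dir, "-xf", dir + ".tar"},
		{"sudo", "podman", "build", "-t", "localhost/my-image:" + profile, dir, "--cgroup-manager=cgroupfs"},
		{"sudo", "rm", "-rf", dir}, // cleanup, as in the log's final rm steps
	}
	for _, step := range steps {
		if err := nodeSSH(profile, step...); err != nil {
			fmt.Println("step failed:", step, err)
			return
		}
	}
}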

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-012564
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-012564 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-012564 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-012564 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 47570: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-012564 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-012564 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-012564 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [67a5be4c-352f-45d7-8847-3acb633118a3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [67a5be4c-352f-45d7-8847-3acb633118a3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003565938s
I1020 12:04:34.684947   14592 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.22s)
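
The "waiting 4m0s for pods matching ..." lines above come from a label-selector poll: the helper watches the pod until its phase reaches Running, which here took about 8s. A minimal sketch of that wait, assuming kubectl on PATH (the real helpers_test.go logic differs in detail):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Ask for just the phase of pods carrying the run=nginx-svc label.
		out, _ := exec.Command("kubectl", "--context", "functional-012564",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("run=nginx-svc healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for run=nginx-svc")
}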

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image rm kicbase/echo-server:functional-012564 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-012564 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
I1020 12:04:34.699496   14592 retry.go:31] will retry after 2.211552914s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:3c555765-df16-4249-a2c4-b031be9be888 ResourceVersion:674 Generation:0 CreationTimestamp:2025-10-20 12:04:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001d6b190 VolumeMode:0xc001d6b1a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
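
The retry.go lines interleaved here and further down (a PVC stuck in "Pending" while the provisioner catches up) show growing, slightly randomized delays: 2.21s above, then 5.99s below. A small sketch of that backoff shape, with a stub check standing in for the PVC phase query (durations and the doubling factor are illustrative, not minikube's exact parameters):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries check with an exponentially growing, jittered delay.
func retryExpo(check func() error, initial, max time.Duration) error {
	wait := initial
	for {
		err := check()
		if err == nil {
			return nil
		}
		if wait > max {
			return fmt.Errorf("giving up: %w", err)
		}
		d := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		wait *= 2
	}
}

func main() {
	tries := 0
	_ = retryExpo(func() error {
		tries++
		if tries < 3 { // pretend the PVC binds on the third poll
			return errors.New(`testpvc phase = "Pending", want "Bound"`)
		}
		return nil
	}, time.Second, time.Minute)
	fmt.Println("pvc bound")
}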

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.12.166 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-012564 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "333.243891ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.229846ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "334.775004ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "48.978489ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/MountCmd/any-port (5.75s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012564 /tmp/TestFunctionalparallelMountCmdany-port4241575931/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760961876952916620" to /tmp/TestFunctionalparallelMountCmdany-port4241575931/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760961876952916620" to /tmp/TestFunctionalparallelMountCmdany-port4241575931/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760961876952916620" to /tmp/TestFunctionalparallelMountCmdany-port4241575931/001/test-1760961876952916620
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "findmnt -T /mount-9p | grep 9p"
I1020 12:04:36.975333   14592 retry.go:31] will retry after 5.997006946s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:3c555765-df16-4249-a2c4-b031be9be888 ResourceVersion:674 Generation:0 CreationTimestamp:2025-10-20 12:04:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001c885f0 VolumeMode:0xc001c88600 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.971409ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1020 12:04:37.229194   14592 retry.go:31] will retry after 594.091999ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 20 12:04 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 20 12:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 20 12:04 test-1760961876952916620
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh cat /mount-9p/test-1760961876952916620
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-012564 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [56c78424-a09a-43c4-943a-ed6932fe7111] Pending
helpers_test.go:352: "busybox-mount" [56c78424-a09a-43c4-943a-ed6932fe7111] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [56c78424-a09a-43c4-943a-ed6932fe7111] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [56c78424-a09a-43c4-943a-ed6932fe7111] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003036089s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-012564 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012564 /tmp/TestFunctionalparallelMountCmdany-port4241575931/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.75s)
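
The mount tests above follow one lifecycle: launch `minikube mount` as a background daemon, probe from inside the node with findmnt until the 9p mount appears, exercise it, then kill the daemon. A compact sketch of that lifecycle (the source path is hypothetical; the profile name is from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Start the mount in the background, like the "(dbg) daemon:" lines.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-012564", "/tmp/mount-src:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill() // the final "stopping [...]" step

	// findmnt exits non-zero until the mount shows up, hence the retries above.
	check := exec.Command("out/minikube-linux-amd64", "-p", "functional-012564",
		"ssh", "findmnt -T /mount-9p | grep 9p")
	if err := check.Run(); err != nil {
		fmt.Println("mount not visible yet:", err)
	}
}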

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.05s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012564 /tmp/TestFunctionalparallelMountCmdspecific-port4004947825/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (278.888411ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1020 12:04:42.985873   14592 retry.go:31] will retry after 596.852188ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012564 /tmp/TestFunctionalparallelMountCmdspecific-port4004947825/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 ssh "sudo umount -f /mount-9p": exit status 1 (325.895251ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-012564 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012564 /tmp/TestFunctionalparallelMountCmdspecific-port4004947825/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.05s)
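
Note the cleanup above still passes even though `sudo umount -f /mount-9p` fails: the mount daemon had already been stopped, so umount reports "not mounted." with status 32 (my reading of umount(8)'s "mount failure" code). During teardown that case is benign, along these lines:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// umountBestEffort unmounts target but ignores the already-unmounted case.
func umountBestEffort(target string) error {
	out, err := exec.Command("sudo", "umount", "-f", target).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 32 {
		fmt.Printf("ignoring during teardown: %s", out)
		return nil
	}
	return err
}

func main() {
	if err := umountBestEffort("/mount-9p"); err != nil {
		fmt.Println("umount failed:", err)
	}
}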

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012564 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3781375578/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012564 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3781375578/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012564 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3781375578/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012564 ssh "findmnt -T" /mount1: exit status 1 (386.489798ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1020 12:04:45.142347   14592 retry.go:31] will retry after 724.457174ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-012564 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012564 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3781375578/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012564 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3781375578/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012564 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3781375578/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ServiceCmd/List (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-012564 service list: (1.700157441s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-012564 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-012564 service list -o json: (1.685827348s)
functional_test.go:1504: Took "1.685952373s" to run "out/minikube-linux-amd64 -p functional-012564 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-012564
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-012564
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-012564
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (109.25s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-901966 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m48.533892883s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (109.25s)

TestMultiControlPlane/serial/DeployApp (4.48s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-901966 kubectl -- rollout status deployment/busybox: (2.738026641s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-5kgkg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-8rk2q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-ws6nq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-5kgkg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-8rk2q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-ws6nq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-5kgkg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-8rk2q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-ws6nq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.48s)

TestMultiControlPlane/serial/PingHostFromPods (0.92s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-5kgkg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-5kgkg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-8rk2q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-8rk2q -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-ws6nq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 kubectl -- exec busybox-7b57f96db7-ws6nq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.92s)
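
The pipeline in this test, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, grabs the fifth line of busybox nslookup output and its third space-separated field, i.e. the resolved host IP that the pod then pings. The same extraction in Go, over a sample output whose layout is assumed from busybox nslookup:

package main

import (
	"fmt"
	"strings"
)

func main() {
	out := `Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal
`
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return
	}
	fields := strings.Split(lines[4], " ") // NR==5, split like cut -d' '
	if len(fields) >= 3 {
		fmt.Println(fields[2]) // -f3 -> 192.168.49.1
	}
}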

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.54s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-901966 node add --alsologtostderr -v 5: (23.671072054s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.54s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-901966 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

TestMultiControlPlane/serial/CopyFile (17.01s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp testdata/cp-test.txt ha-901966:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1590577566/001/cp-test_ha-901966.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966:/home/docker/cp-test.txt ha-901966-m02:/home/docker/cp-test_ha-901966_ha-901966-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m02 "sudo cat /home/docker/cp-test_ha-901966_ha-901966-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966:/home/docker/cp-test.txt ha-901966-m03:/home/docker/cp-test_ha-901966_ha-901966-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m03 "sudo cat /home/docker/cp-test_ha-901966_ha-901966-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966:/home/docker/cp-test.txt ha-901966-m04:/home/docker/cp-test_ha-901966_ha-901966-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m04 "sudo cat /home/docker/cp-test_ha-901966_ha-901966-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp testdata/cp-test.txt ha-901966-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1590577566/001/cp-test_ha-901966-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966-m02:/home/docker/cp-test.txt ha-901966:/home/docker/cp-test_ha-901966-m02_ha-901966.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966 "sudo cat /home/docker/cp-test_ha-901966-m02_ha-901966.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966-m02:/home/docker/cp-test.txt ha-901966-m03:/home/docker/cp-test_ha-901966-m02_ha-901966-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m03 "sudo cat /home/docker/cp-test_ha-901966-m02_ha-901966-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966-m02:/home/docker/cp-test.txt ha-901966-m04:/home/docker/cp-test_ha-901966-m02_ha-901966-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m04 "sudo cat /home/docker/cp-test_ha-901966-m02_ha-901966-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp testdata/cp-test.txt ha-901966-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1590577566/001/cp-test_ha-901966-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966-m03:/home/docker/cp-test.txt ha-901966:/home/docker/cp-test_ha-901966-m03_ha-901966.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966 "sudo cat /home/docker/cp-test_ha-901966-m03_ha-901966.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966-m03:/home/docker/cp-test.txt ha-901966-m02:/home/docker/cp-test_ha-901966-m03_ha-901966-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m02 "sudo cat /home/docker/cp-test_ha-901966-m03_ha-901966-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966-m03:/home/docker/cp-test.txt ha-901966-m04:/home/docker/cp-test_ha-901966-m03_ha-901966-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m04 "sudo cat /home/docker/cp-test_ha-901966-m03_ha-901966-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp testdata/cp-test.txt ha-901966-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1590577566/001/cp-test_ha-901966-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966-m04:/home/docker/cp-test.txt ha-901966:/home/docker/cp-test_ha-901966-m04_ha-901966.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966 "sudo cat /home/docker/cp-test_ha-901966-m04_ha-901966.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966-m04:/home/docker/cp-test.txt ha-901966-m02:/home/docker/cp-test_ha-901966-m04_ha-901966-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m02 "sudo cat /home/docker/cp-test_ha-901966-m04_ha-901966-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 cp ha-901966-m04:/home/docker/cp-test.txt ha-901966-m03:/home/docker/cp-test_ha-901966-m04_ha-901966-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 ssh -n ha-901966-m03 "sudo cat /home/docker/cp-test_ha-901966-m04_ha-901966-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.01s)
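
The CopyFile block above is a full matrix: push testdata/cp-test.txt to every node, then copy it between every ordered node pair and cat it back over ssh. The long command list collapses to a couple of loops, roughly (node names are from the log; the loop itself is mine):

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	full := append([]string{"-p", "ha-901966"}, args...)
	if out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput(); err != nil {
		fmt.Printf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	nodes := []string{"ha-901966", "ha-901966-m02", "ha-901966-m03", "ha-901966-m04"}
	for _, src := range nodes {
		run("cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			name := fmt.Sprintf("cp-test_%s_%s.txt", src, dst)
			run("cp", src+":/home/docker/cp-test.txt", dst+":/home/docker/"+name)
			run("ssh", "-n", dst, "sudo cat /home/docker/"+name)
		}
	}
}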

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.18s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-901966 node stop m02 --alsologtostderr -v 5: (12.480960607s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-901966 status --alsologtostderr -v 5: exit status 7 (699.224701ms)

-- stdout --
	ha-901966
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-901966-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-901966-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-901966-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1020 12:17:26.364695   79751 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:17:26.365146   79751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:17:26.365154   79751 out.go:374] Setting ErrFile to fd 2...
	I1020 12:17:26.365159   79751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:17:26.365370   79751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:17:26.365537   79751 out.go:368] Setting JSON to false
	I1020 12:17:26.365562   79751 mustload.go:65] Loading cluster: ha-901966
	I1020 12:17:26.365716   79751 notify.go:220] Checking for updates...
	I1020 12:17:26.366033   79751 config.go:182] Loaded profile config "ha-901966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:17:26.366049   79751 status.go:174] checking status of ha-901966 ...
	I1020 12:17:26.366529   79751 cli_runner.go:164] Run: docker container inspect ha-901966 --format={{.State.Status}}
	I1020 12:17:26.386298   79751 status.go:371] ha-901966 host status = "Running" (err=<nil>)
	I1020 12:17:26.386336   79751 host.go:66] Checking if "ha-901966" exists ...
	I1020 12:17:26.386653   79751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-901966
	I1020 12:17:26.405093   79751 host.go:66] Checking if "ha-901966" exists ...
	I1020 12:17:26.405363   79751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:17:26.405413   79751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-901966
	I1020 12:17:26.423828   79751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/ha-901966/id_rsa Username:docker}
	I1020 12:17:26.522405   79751 ssh_runner.go:195] Run: systemctl --version
	I1020 12:17:26.529023   79751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:17:26.541937   79751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:17:26.601405   79751 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-20 12:17:26.590095171 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:17:26.601939   79751 kubeconfig.go:125] found "ha-901966" server: "https://192.168.49.254:8443"
	I1020 12:17:26.601971   79751 api_server.go:166] Checking apiserver status ...
	I1020 12:17:26.602020   79751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:17:26.613688   79751 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1268/cgroup
	W1020 12:17:26.622569   79751 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1268/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:17:26.622629   79751 ssh_runner.go:195] Run: ls
	I1020 12:17:26.626543   79751 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1020 12:17:26.632199   79751 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1020 12:17:26.632222   79751 status.go:463] ha-901966 apiserver status = Running (err=<nil>)
	I1020 12:17:26.632232   79751 status.go:176] ha-901966 status: &{Name:ha-901966 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:17:26.632252   79751 status.go:174] checking status of ha-901966-m02 ...
	I1020 12:17:26.632518   79751 cli_runner.go:164] Run: docker container inspect ha-901966-m02 --format={{.State.Status}}
	I1020 12:17:26.650590   79751 status.go:371] ha-901966-m02 host status = "Stopped" (err=<nil>)
	I1020 12:17:26.650616   79751 status.go:384] host is not running, skipping remaining checks
	I1020 12:17:26.650623   79751 status.go:176] ha-901966-m02 status: &{Name:ha-901966-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:17:26.650660   79751 status.go:174] checking status of ha-901966-m03 ...
	I1020 12:17:26.650931   79751 cli_runner.go:164] Run: docker container inspect ha-901966-m03 --format={{.State.Status}}
	I1020 12:17:26.669327   79751 status.go:371] ha-901966-m03 host status = "Running" (err=<nil>)
	I1020 12:17:26.669351   79751 host.go:66] Checking if "ha-901966-m03" exists ...
	I1020 12:17:26.669621   79751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-901966-m03
	I1020 12:17:26.689011   79751 host.go:66] Checking if "ha-901966-m03" exists ...
	I1020 12:17:26.689259   79751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:17:26.689295   79751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-901966-m03
	I1020 12:17:26.707273   79751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/ha-901966-m03/id_rsa Username:docker}
	I1020 12:17:26.805504   79751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:17:26.820082   79751 kubeconfig.go:125] found "ha-901966" server: "https://192.168.49.254:8443"
	I1020 12:17:26.820112   79751 api_server.go:166] Checking apiserver status ...
	I1020 12:17:26.820152   79751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:17:26.831299   79751 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup
	W1020 12:17:26.840400   79751 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:17:26.840469   79751 ssh_runner.go:195] Run: ls
	I1020 12:17:26.844594   79751 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1020 12:17:26.848590   79751 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1020 12:17:26.848610   79751 status.go:463] ha-901966-m03 apiserver status = Running (err=<nil>)
	I1020 12:17:26.848618   79751 status.go:176] ha-901966-m03 status: &{Name:ha-901966-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:17:26.848632   79751 status.go:174] checking status of ha-901966-m04 ...
	I1020 12:17:26.848905   79751 cli_runner.go:164] Run: docker container inspect ha-901966-m04 --format={{.State.Status}}
	I1020 12:17:26.868318   79751 status.go:371] ha-901966-m04 host status = "Running" (err=<nil>)
	I1020 12:17:26.868342   79751 host.go:66] Checking if "ha-901966-m04" exists ...
	I1020 12:17:26.868581   79751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-901966-m04
	I1020 12:17:26.886803   79751 host.go:66] Checking if "ha-901966-m04" exists ...
	I1020 12:17:26.887073   79751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:17:26.887107   79751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-901966-m04
	I1020 12:17:26.906356   79751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/ha-901966-m04/id_rsa Username:docker}
	I1020 12:17:27.004441   79751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:17:27.017509   79751 status.go:176] ha-901966-m04 status: &{Name:ha-901966-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.18s)
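
With m02 stopped, the status check above still reaches the apiserver through the HA virtual IP and gets a 200 from https://192.168.49.254:8443/healthz, while the overall command exits 7 to flag the stopped node. A minimal version of that healthz probe, skipping certificate verification since the apiserver cert is not in the host trust store (an assumption on my part):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 plus "ok" in the log
}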

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.49s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-901966 node start m02 --alsologtostderr -v 5: (13.501027994s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.49s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.32s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-901966 stop --alsologtostderr -v 5: (45.430397406s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 start --wait true --alsologtostderr -v 5
E1020 12:18:43.468512   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:19:25.370953   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:19:25.377381   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:19:25.388864   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:19:25.410269   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:19:25.451670   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:19:25.533148   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:19:25.695421   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:19:26.016943   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:19:26.658748   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:19:27.940717   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-901966 start --wait true --alsologtostderr -v 5: (1m1.781024063s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.32s)
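
Note: the stop/start round-trip above can be reproduced by hand; a minimal sketch, assuming an illustrative profile name "ha-demo" (not the test's profile):

	minikube -p ha-demo node list          # record the node list before the restart
	minikube -p ha-demo stop               # stop every node in the profile
	minikube -p ha-demo start --wait true  # restart all nodes and wait for readiness
	minikube -p ha-demo node list          # should report the same nodes as before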

TestMultiControlPlane/serial/DeleteSecondaryNode (10.17s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 node delete m03 --alsologtostderr -v 5
E1020 12:19:30.502200   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:19:35.624273   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-901966 node delete m03 --alsologtostderr -v 5: (9.312821285s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.17s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (46.62s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 stop --alsologtostderr -v 5
E1020 12:19:45.865699   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:20:06.348023   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:20:06.539462   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-901966 stop --alsologtostderr -v 5: (46.511445405s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-901966 status --alsologtostderr -v 5: exit status 7 (106.002682ms)

-- stdout --
	ha-901966
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-901966-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-901966-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1020 12:20:27.892526   94110 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:20:27.892821   94110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:27.892831   94110 out.go:374] Setting ErrFile to fd 2...
	I1020 12:20:27.892835   94110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:27.893100   94110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:20:27.893323   94110 out.go:368] Setting JSON to false
	I1020 12:20:27.893353   94110 mustload.go:65] Loading cluster: ha-901966
	I1020 12:20:27.893409   94110 notify.go:220] Checking for updates...
	I1020 12:20:27.893782   94110 config.go:182] Loaded profile config "ha-901966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:27.893798   94110 status.go:174] checking status of ha-901966 ...
	I1020 12:20:27.894278   94110 cli_runner.go:164] Run: docker container inspect ha-901966 --format={{.State.Status}}
	I1020 12:20:27.915016   94110 status.go:371] ha-901966 host status = "Stopped" (err=<nil>)
	I1020 12:20:27.915037   94110 status.go:384] host is not running, skipping remaining checks
	I1020 12:20:27.915043   94110 status.go:176] ha-901966 status: &{Name:ha-901966 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:20:27.915100   94110 status.go:174] checking status of ha-901966-m02 ...
	I1020 12:20:27.915402   94110 cli_runner.go:164] Run: docker container inspect ha-901966-m02 --format={{.State.Status}}
	I1020 12:20:27.933960   94110 status.go:371] ha-901966-m02 host status = "Stopped" (err=<nil>)
	I1020 12:20:27.933988   94110 status.go:384] host is not running, skipping remaining checks
	I1020 12:20:27.933993   94110 status.go:176] ha-901966-m02 status: &{Name:ha-901966-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:20:27.934013   94110 status.go:174] checking status of ha-901966-m04 ...
	I1020 12:20:27.934268   94110 cli_runner.go:164] Run: docker container inspect ha-901966-m04 --format={{.State.Status}}
	I1020 12:20:27.952341   94110 status.go:371] ha-901966-m04 host status = "Stopped" (err=<nil>)
	I1020 12:20:27.952364   94110 status.go:384] host is not running, skipping remaining checks
	I1020 12:20:27.952370   94110 status.go:176] ha-901966-m04 status: &{Name:ha-901966-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (46.62s)

TestMultiControlPlane/serial/RestartCluster (57.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1020 12:20:47.310268   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-901966 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (56.410221319s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.26s)
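
Note: the go-template in the final check walks every node's status.conditions and prints the status of the condition whose type is "Ready", one line per node. The same readiness check in standalone form:

	# Prints "True" once per Ready node; a "False" or "Unknown" flags a degraded node.
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'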

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (75.6s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 node add --control-plane --alsologtostderr -v 5
E1020 12:22:09.232524   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-901966 node add --control-plane --alsologtostderr -v 5: (1m14.711808305s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-901966 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.60s)
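
Note: growing an HA cluster by one control-plane node is a single command; a sketch, again with an illustrative profile name:

	minikube -p ha-demo node add --control-plane  # join an additional control-plane node
	minikube -p ha-demo status                    # the new node should report Running/Configured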

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

TestJSONOutput/start/Command (39.15s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-795500 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-795500 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (39.146470202s)
--- PASS: TestJSONOutput/start/Command (39.15s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-795500 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-795500 --output=json --user=testUser: (6.164323656s)
--- PASS: TestJSONOutput/stop/Command (6.16s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-258150 --memory=3072 --output=json --wait=true --driver=fail
E1020 12:23:43.468978   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-258150 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (68.855772ms)

-- stdout --
	{"specversion":"1.0","id":"c73578ac-d894-4537-b664-37fe0c61b670","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-258150] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ded11868-4003-44f3-ae10-34f6cd74312b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21773"}}
	{"specversion":"1.0","id":"a1990c29-ea66-4cd1-ac1a-1f336dff2e82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"af4164d6-2223-42ca-9a95-3df9985da93b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig"}}
	{"specversion":"1.0","id":"dab8e1d3-0bcc-44b8-ab86-0fba563eaa69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube"}}
	{"specversion":"1.0","id":"a8560ff0-9d37-4465-a7b4-a4873e9c4e2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3fa4a3eb-dfa1-44a9-81aa-e3f53eb9a864","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3ab0e42b-04c8-4d0e-b0a1-dc9c642558d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-258150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-258150
--- PASS: TestErrorJSONOutput (0.21s)
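
Note: with --output=json, minikube emits one CloudEvents JSON object per line, as the captured stdout above shows. A sketch of consuming that stream, assuming jq is installed and using an illustrative profile name:

	# Print the human-readable message carried by each event.
	minikube start -p demo --output=json | jq -r '.data.message // empty'
	# Error events have type "io.k8s.sigs.minikube.error" and carry an exitcode field.
	minikube start -p demo --output=json | jq -r 'select(.type | endswith(".error")) | .data.exitcode'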

TestKicCustomNetwork/create_custom_network (27.98s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-282647 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-282647 --network=: (25.81925643s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-282647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-282647
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-282647: (2.139335901s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.98s)

TestKicCustomNetwork/use_default_bridge_network (23.99s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-753984 --network=bridge
E1020 12:24:25.372077   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-753984 --network=bridge: (21.968784149s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-753984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-753984
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-753984: (1.998807751s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.99s)

TestKicExistingNetwork (25.5s)

=== RUN   TestKicExistingNetwork
I1020 12:24:35.614405   14592 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1020 12:24:35.631835   14592 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1020 12:24:35.631919   14592 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1020 12:24:35.631935   14592 cli_runner.go:164] Run: docker network inspect existing-network
W1020 12:24:35.649320   14592 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1020 12:24:35.649354   14592 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1020 12:24:35.649368   14592 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1020 12:24:35.649504   14592 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1020 12:24:35.667033   14592 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b5ff940911b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:a6:30:3f:56:08} reservation:<nil>}
I1020 12:24:35.667368   14592 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed2ea0}
I1020 12:24:35.667389   14592 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1020 12:24:35.667432   14592 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1020 12:24:35.724210   14592 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-237281 --network=existing-network
E1020 12:24:53.079854   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-237281 --network=existing-network: (23.339748068s)
helpers_test.go:175: Cleaning up "existing-network-237281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-237281
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-237281: (2.013165789s)
I1020 12:25:01.095212   14592 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.50s)
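
Note: the scenario above (reusing a pre-existing docker network) can be reproduced directly; a sketch with illustrative names, mirroring the network settings the test creates:

	docker network create --driver=bridge --subnet=192.168.58.0/24 my-net
	minikube start -p net-demo --network=my-net  # attaches to my-net instead of creating a network
	docker network ls --format '{{.Name}}'       # my-net should still be listed after "minikube delete"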

TestKicCustomSubnet (24.53s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-865078 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-865078 --subnet=192.168.60.0/24: (22.315440504s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-865078 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-865078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-865078
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-865078: (2.193068111s)
--- PASS: TestKicCustomSubnet (24.53s)

TestKicStaticIP (25.58s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-168526 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-168526 --static-ip=192.168.200.200: (23.27272722s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-168526 ip
helpers_test.go:175: Cleaning up "static-ip-168526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-168526
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-168526: (2.168546638s)
--- PASS: TestKicStaticIP (25.58s)
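
Note: the two KIC networking knobs exercised above compose the same way by hand; a sketch with illustrative profile names and addresses:

	minikube start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'  # expect 192.168.60.0/24
	minikube start -p static-demo --static-ip=192.168.200.200
	minikube -p static-demo ip  # expect 192.168.200.200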

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (47.25s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-902948 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-902948 --driver=docker  --container-runtime=crio: (20.284753171s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-905708 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-905708 --driver=docker  --container-runtime=crio: (20.894454224s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-902948
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-905708
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-905708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-905708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-905708: (2.396702114s)
helpers_test.go:175: Cleaning up "first-902948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-902948
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-902948: (2.445635701s)
--- PASS: TestMinikubeProfile (47.25s)

TestMountStart/serial/StartWithMountFirst (5.64s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-538834 --memory=3072 --mount-string /tmp/TestMountStartserial2135046338/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-538834 --memory=3072 --mount-string /tmp/TestMountStartserial2135046338/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.642871492s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.64s)
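
Note: the flags above wire a host directory into the guest at start time: --mount-string takes host-path:guest-path, and --mount-port/--mount-uid/--mount-gid/--mount-msize tune the 9p mount. A sketch with illustrative paths and profile name, mirroring the test's flags:

	minikube start -p mount-demo --no-kubernetes \
	  --mount-string /tmp/shared:/minikube-host \
	  --mount-port 46464 --mount-uid 0 --mount-gid 0 --mount-msize 6543
	minikube -p mount-demo ssh -- ls /minikube-host  # host files should be visible in the guest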

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-538834 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.51s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-551338 --memory=3072 --mount-string /tmp/TestMountStartserial2135046338/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-551338 --memory=3072 --mount-string /tmp/TestMountStartserial2135046338/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.506582299s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.51s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-551338 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-538834 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-538834 --alsologtostderr -v=5: (1.707369736s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-551338 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-551338
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-551338: (1.250958078s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (7.16s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-551338
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-551338: (6.161306137s)
--- PASS: TestMountStart/serial/RestartStopped (7.16s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-551338 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (91.4s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703945 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-703945 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m30.913712523s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (91.40s)

TestMultiNode/serial/DeployApp2Nodes (3.87s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-703945 -- rollout status deployment/busybox: (2.577475262s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- exec busybox-7b57f96db7-7vmd5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- exec busybox-7b57f96db7-bgj55 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- exec busybox-7b57f96db7-7vmd5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- exec busybox-7b57f96db7-bgj55 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- exec busybox-7b57f96db7-7vmd5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- exec busybox-7b57f96db7-bgj55 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.87s)
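
Note: the per-pod DNS checks above can be folded into one loop; a sketch assuming the test's busybox deployment and an illustrative "app=busybox" label selector (the test itself lists pods without a selector):

	kubectl rollout status deployment/busybox
	for pod in $(kubectl get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done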

TestMultiNode/serial/PingHostFrom2Pods (0.66s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- exec busybox-7b57f96db7-7vmd5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- exec busybox-7b57f96db7-7vmd5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- exec busybox-7b57f96db7-bgj55 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703945 -- exec busybox-7b57f96db7-bgj55 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.66s)
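
Note: the pipeline above resolves host.minikube.internal from inside a pod and extracts the answer: awk 'NR==5' keeps the fifth line of busybox nslookup output (the resolved Address line) and cut takes its third space-separated field, the IP. A standalone sketch with an illustrative pod name:

	HOST_IP=$(kubectl exec busybox-demo -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl exec busybox-demo -- ping -c 1 "$HOST_IP"  # the host should answer from inside the pod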

TestMultiNode/serial/AddNode (24.24s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-703945 -v=5 --alsologtostderr
E1020 12:28:43.471874   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-703945 -v=5 --alsologtostderr: (23.599943071s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.24s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-703945 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.71s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 cp testdata/cp-test.txt multinode-703945:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 cp multinode-703945:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2181529437/001/cp-test_multinode-703945.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 cp multinode-703945:/home/docker/cp-test.txt multinode-703945-m02:/home/docker/cp-test_multinode-703945_multinode-703945-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945-m02 "sudo cat /home/docker/cp-test_multinode-703945_multinode-703945-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 cp multinode-703945:/home/docker/cp-test.txt multinode-703945-m03:/home/docker/cp-test_multinode-703945_multinode-703945-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945-m03 "sudo cat /home/docker/cp-test_multinode-703945_multinode-703945-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 cp testdata/cp-test.txt multinode-703945-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 cp multinode-703945-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2181529437/001/cp-test_multinode-703945-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 cp multinode-703945-m02:/home/docker/cp-test.txt multinode-703945:/home/docker/cp-test_multinode-703945-m02_multinode-703945.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945 "sudo cat /home/docker/cp-test_multinode-703945-m02_multinode-703945.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 cp multinode-703945-m02:/home/docker/cp-test.txt multinode-703945-m03:/home/docker/cp-test_multinode-703945-m02_multinode-703945-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945-m03 "sudo cat /home/docker/cp-test_multinode-703945-m02_multinode-703945-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 cp testdata/cp-test.txt multinode-703945-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 cp multinode-703945-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2181529437/001/cp-test_multinode-703945-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 cp multinode-703945-m03:/home/docker/cp-test.txt multinode-703945:/home/docker/cp-test_multinode-703945-m03_multinode-703945.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945 "sudo cat /home/docker/cp-test_multinode-703945-m03_multinode-703945.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 cp multinode-703945-m03:/home/docker/cp-test.txt multinode-703945-m02:/home/docker/cp-test_multinode-703945-m03_multinode-703945-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 ssh -n multinode-703945-m02 "sudo cat /home/docker/cp-test_multinode-703945-m03_multinode-703945-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.71s)
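
Note: the copy matrix above exercises every direction of "minikube cp"; a condensed sketch with illustrative profile and node names:

	minikube -p multi-demo cp testdata/cp-test.txt multi-demo:/home/docker/cp-test.txt  # host -> node
	minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt /tmp/out.txt          # node -> host
	minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt  # node -> node
	minikube -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/cp-test.txt"    # verify contents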

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-703945 node stop m03: (1.258507951s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-703945 status: exit status 7 (491.727289ms)

-- stdout --
	multinode-703945
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-703945-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-703945-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-703945 status --alsologtostderr: exit status 7 (498.404028ms)

-- stdout --
	multinode-703945
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-703945-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-703945-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1020 12:29:15.130661  153893 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:29:15.130909  153893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:29:15.130916  153893 out.go:374] Setting ErrFile to fd 2...
	I1020 12:29:15.130921  153893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:29:15.131106  153893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:29:15.131289  153893 out.go:368] Setting JSON to false
	I1020 12:29:15.131318  153893 mustload.go:65] Loading cluster: multinode-703945
	I1020 12:29:15.131409  153893 notify.go:220] Checking for updates...
	I1020 12:29:15.132320  153893 config.go:182] Loaded profile config "multinode-703945": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:29:15.132359  153893 status.go:174] checking status of multinode-703945 ...
	I1020 12:29:15.133595  153893 cli_runner.go:164] Run: docker container inspect multinode-703945 --format={{.State.Status}}
	I1020 12:29:15.152361  153893 status.go:371] multinode-703945 host status = "Running" (err=<nil>)
	I1020 12:29:15.152385  153893 host.go:66] Checking if "multinode-703945" exists ...
	I1020 12:29:15.152641  153893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-703945
	I1020 12:29:15.170895  153893 host.go:66] Checking if "multinode-703945" exists ...
	I1020 12:29:15.171187  153893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:29:15.171234  153893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-703945
	I1020 12:29:15.189879  153893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/multinode-703945/id_rsa Username:docker}
	I1020 12:29:15.288426  153893 ssh_runner.go:195] Run: systemctl --version
	I1020 12:29:15.295094  153893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:29:15.308111  153893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:29:15.365686  153893 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-20 12:29:15.354315548 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:29:15.366284  153893 kubeconfig.go:125] found "multinode-703945" server: "https://192.168.67.2:8443"
	I1020 12:29:15.366317  153893 api_server.go:166] Checking apiserver status ...
	I1020 12:29:15.366356  153893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:29:15.378588  153893 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup
	W1020 12:29:15.387518  153893 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:29:15.387566  153893 ssh_runner.go:195] Run: ls
	I1020 12:29:15.391552  153893 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1020 12:29:15.395729  153893 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1020 12:29:15.395752  153893 status.go:463] multinode-703945 apiserver status = Running (err=<nil>)
	I1020 12:29:15.395762  153893 status.go:176] multinode-703945 status: &{Name:multinode-703945 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:29:15.395801  153893 status.go:174] checking status of multinode-703945-m02 ...
	I1020 12:29:15.396078  153893 cli_runner.go:164] Run: docker container inspect multinode-703945-m02 --format={{.State.Status}}
	I1020 12:29:15.413690  153893 status.go:371] multinode-703945-m02 host status = "Running" (err=<nil>)
	I1020 12:29:15.413713  153893 host.go:66] Checking if "multinode-703945-m02" exists ...
	I1020 12:29:15.413968  153893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-703945-m02
	I1020 12:29:15.432411  153893 host.go:66] Checking if "multinode-703945-m02" exists ...
	I1020 12:29:15.432645  153893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:29:15.432677  153893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-703945-m02
	I1020 12:29:15.451215  153893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21773-11075/.minikube/machines/multinode-703945-m02/id_rsa Username:docker}
	I1020 12:29:15.549206  153893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:29:15.562615  153893 status.go:176] multinode-703945-m02 status: &{Name:multinode-703945-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:29:15.562652  153893 status.go:174] checking status of multinode-703945-m03 ...
	I1020 12:29:15.562952  153893 cli_runner.go:164] Run: docker container inspect multinode-703945-m03 --format={{.State.Status}}
	I1020 12:29:15.582321  153893 status.go:371] multinode-703945-m03 host status = "Stopped" (err=<nil>)
	I1020 12:29:15.582351  153893 status.go:384] host is not running, skipping remaining checks
	I1020 12:29:15.582359  153893 status.go:176] multinode-703945-m03 status: &{Name:multinode-703945-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
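Note: `minikube status` deliberately exits non-zero when any node is down; the exit status 7 accepted above is the "host stopped" case. A minimal Go sketch of that interpretation, assuming the binary is on PATH and reusing the profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the test above; profile name taken from the log.
	cmd := exec.Command("minikube", "-p", "multinode-703945", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// Exit 7 means at least one node (here m03) reports Stopped,
		// which is exactly what StopNode expects to see.
		fmt.Println("status: one or more nodes stopped (exit 7)")
	} else if err != nil {
		fmt.Println("status failed unexpectedly:", err)
	}
}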

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-703945 node start m03 -v=5 --alsologtostderr: (6.967822899s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.67s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-703945
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-703945
E1020 12:29:25.371132   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-703945: (29.577811216s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703945 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-703945 --wait=true -v=5 --alsologtostderr: (48.935172508s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-703945
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.61s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-703945 node delete m03: (4.658000168s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.26s)
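The readiness probe at the end of DeleteNode is an ordinary Go text/template evaluated over the node list. A small sketch run against stub data (kubectl evaluates the template on the JSON object, hence the lowercase .items/.status keys there; the stub below uses exported Go fields instead):

package main

import (
	"os"
	"text/template"
)

type condition struct{ Type, Status string }

type node struct {
	Status struct{ Conditions []condition }
}

func main() {
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	var list struct{ Items []node }
	n := node{}
	n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
	list.Items = []node{n, n} // the two nodes left after deleting m03
	if err := tmpl.Execute(os.Stdout, list); err != nil {
		panic(err)
	}
	// Prints " True" once per node; the test asserts every line is True.
}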

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-703945 stop: (30.165083693s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-703945 status: exit status 7 (85.435409ms)

                                                
                                                
-- stdout --
	multinode-703945
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-703945-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-703945 status --alsologtostderr: exit status 7 (86.996837ms)

                                                
                                                
-- stdout --
	multinode-703945
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-703945-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 12:31:17.425387  163661 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:31:17.425510  163661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:31:17.425522  163661 out.go:374] Setting ErrFile to fd 2...
	I1020 12:31:17.425528  163661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:31:17.425745  163661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:31:17.425963  163661 out.go:368] Setting JSON to false
	I1020 12:31:17.425991  163661 mustload.go:65] Loading cluster: multinode-703945
	I1020 12:31:17.426066  163661 notify.go:220] Checking for updates...
	I1020 12:31:17.426572  163661 config.go:182] Loaded profile config "multinode-703945": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:31:17.426592  163661 status.go:174] checking status of multinode-703945 ...
	I1020 12:31:17.427145  163661 cli_runner.go:164] Run: docker container inspect multinode-703945 --format={{.State.Status}}
	I1020 12:31:17.448248  163661 status.go:371] multinode-703945 host status = "Stopped" (err=<nil>)
	I1020 12:31:17.448270  163661 status.go:384] host is not running, skipping remaining checks
	I1020 12:31:17.448316  163661 status.go:176] multinode-703945 status: &{Name:multinode-703945 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:31:17.448339  163661 status.go:174] checking status of multinode-703945-m02 ...
	I1020 12:31:17.448583  163661 cli_runner.go:164] Run: docker container inspect multinode-703945-m02 --format={{.State.Status}}
	I1020 12:31:17.466406  163661 status.go:371] multinode-703945-m02 host status = "Stopped" (err=<nil>)
	I1020 12:31:17.466450  163661 status.go:384] host is not running, skipping remaining checks
	I1020 12:31:17.466460  163661 status.go:176] multinode-703945-m02 status: &{Name:multinode-703945-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.34s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703945 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-703945 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.12887027s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703945 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.73s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-703945
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703945-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-703945-m02 --driver=docker  --container-runtime=crio: exit status 14 (66.304718ms)

                                                
                                                
-- stdout --
	* [multinode-703945-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-703945-m02' is duplicated with machine name 'multinode-703945-m02' in profile 'multinode-703945'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703945-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-703945-m03 --driver=docker  --container-runtime=crio: (24.484664221s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-703945
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-703945: exit status 80 (283.022356ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-703945 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-703945-m03 already exists in multinode-703945-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-703945-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-703945-m03: (2.401895812s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.28s)
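The two rejections above pin down the rule being validated: a new profile name may collide neither with an existing profile nor with a machine name inside one, while a fresh name (-m03) is accepted. A hypothetical sketch of that check (the data structure and function are illustrative, not minikube's internals):

package main

import "fmt"

// isDuplicate reports whether name clashes with any existing profile
// or with any machine belonging to one.
func isDuplicate(name string, machinesByProfile map[string][]string) bool {
	for profile, machines := range machinesByProfile {
		if name == profile {
			return true
		}
		for _, m := range machines {
			if name == m {
				return true
			}
		}
	}
	return false
}

func main() {
	existing := map[string][]string{
		"multinode-703945": {"multinode-703945", "multinode-703945-m02"},
	}
	fmt.Println(isDuplicate("multinode-703945-m02", existing)) // true  -> MK_USAGE, exit 14
	fmt.Println(isDuplicate("multinode-703945-m03", existing)) // false -> start proceeds
}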

                                                
                                    
TestPreload (147.81s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-425816 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1020 12:33:43.469036   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-425816 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m31.427220295s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-425816 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-425816 image pull gcr.io/k8s-minikube/busybox: (1.58514484s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-425816
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-425816: (5.850483849s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-425816 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1020 12:34:25.370912   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-425816 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.286070359s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-425816 image list
helpers_test.go:175: Cleaning up "test-preload-425816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-425816
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-425816: (2.440003035s)
--- PASS: TestPreload (147.81s)

                                                
                                    
TestScheduledStopUnix (96.28s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-865115 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-865115 --memory=3072 --driver=docker  --container-runtime=crio: (20.438129508s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-865115 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-865115 -n scheduled-stop-865115
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-865115 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1020 12:35:29.405000   14592 retry.go:31] will retry after 101.813µs: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.406206   14592 retry.go:31] will retry after 176.256µs: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.407347   14592 retry.go:31] will retry after 131.853µs: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.408493   14592 retry.go:31] will retry after 235.633µs: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.409644   14592 retry.go:31] will retry after 740.052µs: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.410805   14592 retry.go:31] will retry after 978.715µs: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.411952   14592 retry.go:31] will retry after 862.649µs: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.413084   14592 retry.go:31] will retry after 2.11335ms: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.416286   14592 retry.go:31] will retry after 3.668356ms: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.420548   14592 retry.go:31] will retry after 5.05209ms: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.425767   14592 retry.go:31] will retry after 2.936374ms: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.429147   14592 retry.go:31] will retry after 6.127025ms: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.436389   14592 retry.go:31] will retry after 12.424857ms: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.449638   14592 retry.go:31] will retry after 20.171913ms: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.470959   14592 retry.go:31] will retry after 18.20351ms: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
I1020 12:35:29.490242   14592 retry.go:31] will retry after 29.385085ms: open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/scheduled-stop-865115/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-865115 --cancel-scheduled
E1020 12:35:48.443951   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-865115 -n scheduled-stop-865115
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-865115
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-865115 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-865115
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-865115: exit status 7 (68.280785ms)

                                                
                                                
-- stdout --
	scheduled-stop-865115
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-865115 -n scheduled-stop-865115
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-865115 -n scheduled-stop-865115: exit status 7 (72.123666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-865115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-865115
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-865115: (4.425382849s)
--- PASS: TestScheduledStopUnix (96.28s)
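The burst of retry.go lines above is a poll-with-growing-delay loop waiting for the profile's pid file to appear. A minimal sketch of the pattern, with illustrative growth/jitter constants and a placeholder path rather than minikube's actual ones:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls until path exists, sleeping a little longer each round,
// mirroring the "will retry after ..." sequence in the log above.
func waitForFile(path string, maxWait time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		// Grow the delay with some jitter, as the logged intervals suggest.
		delay = time.Duration(float64(delay) * (1.2 + rand.Float64()))
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForFile("/tmp/scheduled-stop-demo.pid", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}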

                                                
                                    
TestInsufficientStorage (9.62s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-843580 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1020 12:36:46.541931   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-843580 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.103976077s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4d96cb64-7eac-4920-bb09-0986a6941042","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-843580] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"35a9c4dd-06e3-45bb-aed6-405a410a25be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21773"}}
	{"specversion":"1.0","id":"39791a33-4f31-4087-b46e-55146050e15c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ba164aab-ad91-403e-acbd-39e005d051ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig"}}
	{"specversion":"1.0","id":"4513fbfe-ea7e-46aa-aa21-ed8db65cc85f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube"}}
	{"specversion":"1.0","id":"8774b409-35f9-4e04-8739-807ff2bc7665","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d9ec0a1b-9119-45fd-9416-fecf21518b87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"931882ea-029d-436a-88ea-0358c86c446f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"84ad3ad1-3a0a-419e-a75d-4a1c409daaf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b402c762-5a7c-4379-9ab7-fcce706905b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa83209e-818f-4505-9c7d-2e9db2b781d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"87ddcff5-b3a1-4572-b756-2605a2de4f83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-843580\" primary control-plane node in \"insufficient-storage-843580\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b425e512-3468-483d-ac15-5fa884515f97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5212d62-2275-4917-ad6a-9fc4a0b3e756","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f84cfe4e-0346-4876-989e-c274879a8990","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-843580 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-843580 --output=json --layout=cluster: exit status 7 (284.961249ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-843580","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-843580","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1020 12:36:52.210599  184008 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-843580" does not appear in /home/jenkins/minikube-integration/21773-11075/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-843580 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-843580 --output=json --layout=cluster: exit status 7 (285.266917ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-843580","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-843580","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1020 12:36:52.495874  184121 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-843580" does not appear in /home/jenkins/minikube-integration/21773-11075/kubeconfig
	E1020 12:36:52.506830  184121 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/insufficient-storage-843580/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-843580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-843580
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-843580: (1.948771661s)
--- PASS: TestInsufficientStorage (9.62s)
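With --output=json, start emits one CloudEvents-style JSON object per line, as in the stdout above. A sketch that decodes that stream and surfaces the error event; only the fields visible in this report are modeled:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise on the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Piping the start command's stdout through this would print the RSRC_DOCKER_STORAGE event with exit code 26 seen above.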

                                                
                                    
TestRunningBinaryUpgrade (50.52s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2944823530 start -p running-upgrade-942922 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2944823530 start -p running-upgrade-942922 --memory=3072 --vm-driver=docker  --container-runtime=crio: (25.239872648s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-942922 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-942922 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.359982924s)
helpers_test.go:175: Cleaning up "running-upgrade-942922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-942922
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-942922: (2.455840189s)
--- PASS: TestRunningBinaryUpgrade (50.52s)

                                                
                                    
TestKubernetesUpgrade (396.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.069487452s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-196539
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-196539: (1.318719171s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-196539 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-196539 status --format={{.Host}}: exit status 7 (74.644601ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.048989453s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-196539 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (81.430932ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-196539] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-196539
	    minikube start -p kubernetes-upgrade-196539 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1965392 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-196539 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-196539 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m35.693433003s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-196539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-196539
E1020 12:45:41.028997   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/old-k8s-version-384253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:45:43.507903   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-196539: (2.780174866s)
--- PASS: TestKubernetesUpgrade (396.14s)
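The downgrade attempt above fails fast (exit 106, K8S_DOWNGRADE_UNSUPPORTED) before touching the cluster, purely on a version comparison. A sketch of such a gate using golang.org/x/mod/semver; this mirrors the observed behavior, not minikube's actual code:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	existing, requested := "v1.34.1", "v1.28.0"
	if semver.Compare(requested, existing) < 0 {
		// Matches the suggestion in the stderr above: downgrades require
		// deleting and recreating the cluster.
		fmt.Printf("refusing to downgrade %s cluster to %s; delete and recreate instead\n",
			existing, requested)
		return
	}
	fmt.Println("upgrade or same-version restart is allowed")
}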

                                                
                                    
TestMissingContainerUpgrade (71.65s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3175114274 start -p missing-upgrade-123936 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3175114274 start -p missing-upgrade-123936 --memory=3072 --driver=docker  --container-runtime=crio: (22.43240099s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-123936
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-123936: (1.743301589s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-123936
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-123936 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-123936 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.412570846s)
helpers_test.go:175: Cleaning up "missing-upgrade-123936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-123936
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-123936: (2.493993449s)
--- PASS: TestMissingContainerUpgrade (71.65s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030682 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-030682 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (82.333465ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-030682] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (36.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030682 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-030682 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.368084097s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-030682 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.76s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (57.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1235350849 start -p stopped-upgrade-040813 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1235350849 start -p stopped-upgrade-040813 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.903622172s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1235350849 -p stopped-upgrade-040813 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1235350849 -p stopped-upgrade-040813 stop: (1.245268971s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-040813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-040813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.250648374s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (57.40s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (25.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030682 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-030682 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.870063751s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-030682 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-030682 status -o json: exit status 2 (338.334148ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-030682","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-030682
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-030682: (2.36088904s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.57s)
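`status -o json` on a --no-kubernetes profile returns the single JSON document shown above, and exits 2 because kubelet/apiserver are Stopped while the host runs. A sketch unmarshalling it, with field names taken verbatim from that output:

package main

import (
	"encoding/json"
	"fmt"
)

type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Copied from the stdout above.
	raw := `{"Name":"NoKubernetes-030682","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st nodeStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// Host runs while kubelet stays stopped, hence the non-zero exit (2).
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n",
		st.Name, st.Host, st.Kubelet, st.APIServer)
}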

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-040813
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-040813: (1.202561838s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

                                                
                                    
TestPause/serial/Start (42.46s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-918853 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-918853 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (42.45766647s)
--- PASS: TestPause/serial/Start (42.46s)

                                                
                                    
TestNoKubernetes/serial/Start (10.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030682 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-030682 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.80299335s)
--- PASS: TestNoKubernetes/serial/Start (10.80s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-030682 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-030682 "sudo systemctl is-active --quiet service kubelet": exit status 1 (293.520756ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
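The probe above leans on systemctl semantics: `systemctl is-active --quiet <unit>` exits 0 when the unit is active and 3 when it is inactive, which the ssh wrapper surfaces as "Process exited with status 3". A local sketch of the same check:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 3:
		// The expected outcome for a --no-kubernetes profile.
		fmt.Println("kubelet is inactive (exit 3)")
	default:
		fmt.Println("could not determine kubelet state:", err)
	}
}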

                                                
                                    
TestNoKubernetes/serial/ProfileList (17.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (2.140698992s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.380503584s)
--- PASS: TestNoKubernetes/serial/ProfileList (17.52s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-030682
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-030682: (1.257826635s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030682 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-030682 --driver=docker  --container-runtime=crio: (6.935411484s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.94s)

                                                
                                    
TestNetworkPlugins/group/false (3.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-312375 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-312375 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (158.50225ms)

                                                
                                                
-- stdout --
	* [false-312375] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 12:38:29.617946  213280 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:38:29.618205  213280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:38:29.618215  213280 out.go:374] Setting ErrFile to fd 2...
	I1020 12:38:29.618219  213280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:38:29.618613  213280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-11075/.minikube/bin
	I1020 12:38:29.619196  213280 out.go:368] Setting JSON to false
	I1020 12:38:29.620393  213280 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4859,"bootTime":1760959051,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:38:29.620497  213280 start.go:141] virtualization: kvm guest
	I1020 12:38:29.622761  213280 out.go:179] * [false-312375] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:38:29.624160  213280 notify.go:220] Checking for updates...
	I1020 12:38:29.624194  213280 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:38:29.625900  213280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:38:29.627316  213280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-11075/kubeconfig
	I1020 12:38:29.628742  213280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-11075/.minikube
	I1020 12:38:29.630142  213280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:38:29.631522  213280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:38:29.633188  213280 config.go:182] Loaded profile config "NoKubernetes-030682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1020 12:38:29.633280  213280 config.go:182] Loaded profile config "missing-upgrade-123936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1020 12:38:29.633357  213280 config.go:182] Loaded profile config "pause-918853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:29.633433  213280 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:38:29.659270  213280 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1020 12:38:29.659377  213280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:38:29.720480  213280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-20 12:38:29.70991613 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1020 12:38:29.720590  213280 docker.go:318] overlay module found
	I1020 12:38:29.722410  213280 out.go:179] * Using the docker driver based on user configuration
	I1020 12:38:29.723706  213280 start.go:305] selected driver: docker
	I1020 12:38:29.723723  213280 start.go:925] validating driver "docker" against <nil>
	I1020 12:38:29.723735  213280 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:38:29.725537  213280 out.go:203] 
	W1020 12:38:29.726757  213280 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1020 12:38:29.728085  213280 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-312375 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-312375

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-312375

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-312375

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-312375

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-312375

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-312375

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-312375

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-312375

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-312375

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-312375

>>> host: /etc/nsswitch.conf:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: /etc/hosts:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: /etc/resolv.conf:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-312375

>>> host: crictl pods:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: crictl containers:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> k8s: describe netcat deployment:
error: context "false-312375" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-312375" does not exist

>>> k8s: netcat logs:
error: context "false-312375" does not exist

>>> k8s: describe coredns deployment:
error: context "false-312375" does not exist

>>> k8s: describe coredns pods:
error: context "false-312375" does not exist

>>> k8s: coredns logs:
error: context "false-312375" does not exist

>>> k8s: describe api server pod(s):
error: context "false-312375" does not exist

>>> k8s: api server logs:
error: context "false-312375" does not exist

>>> host: /etc/cni:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: ip a s:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: ip r s:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: iptables-save:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: iptables table nat:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> k8s: describe kube-proxy daemon set:
error: context "false-312375" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-312375" does not exist

>>> k8s: kube-proxy logs:
error: context "false-312375" does not exist

>>> host: kubelet daemon status:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: kubelet daemon config:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> k8s: kubelet logs:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 12:38:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-123936
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 12:38:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-918853
contexts:
- context:
    cluster: missing-upgrade-123936
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 12:38:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-123936
  name: missing-upgrade-123936
- context:
    cluster: pause-918853
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 12:38:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-918853
  name: pause-918853
current-context: ""
kind: Config
users:
- name: missing-upgrade-123936
  user:
    client-certificate: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/missing-upgrade-123936/client.crt
    client-key: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/missing-upgrade-123936/client.key
- name: pause-918853
  user:
    client-certificate: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.crt
    client-key: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.key
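Note: the kubeconfig above contains only the missing-upgrade-123936 and pause-918853 entries, and the false-312375 profile was never created (its start exited with MK_USAGE above), which is why every kubectl probe in this debug log fails with "context was not found". One of the existing contexts could be selected explicitly, e.g.:

	kubectl config use-context pause-918853   # illustrative; not part of this run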

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-312375

>>> host: docker daemon status:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: docker daemon config:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: /etc/docker/daemon.json:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: docker system info:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: cri-docker daemon status:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: cri-docker daemon config:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: cri-dockerd version:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: containerd daemon status:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: containerd daemon config:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: /etc/containerd/config.toml:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: containerd config dump:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: crio daemon status:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: crio daemon config:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: /etc/crio:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

>>> host: crio config:
* Profile "false-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312375"

----------------------- debugLogs end: false-312375 [took: 2.873874701s] --------------------------------
helpers_test.go:175: Cleaning up "false-312375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-312375
--- PASS: TestNetworkPlugins/group/false (3.20s)
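Note: exit status 14 (MK_USAGE) is the expected result here; --cni=false is rejected because the crio runtime needs a CNI plugin for pod networking. A start that supplies a CNI instead would pass this validation check, e.g. (a sketch, not executed in this run):

	out/minikube-linux-amd64 start -p false-312375 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio   # any CNI choice other than "false" satisfies the crio requirement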

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-030682 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-030682 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.990858ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)
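Note: the "Process exited with status 3" in stderr follows the systemctl is-active convention (exit 0 only when the unit is active), so the non-zero exit is what confirms kubelet is not running. The same check can be run by hand, dropping --quiet so the state is printed (the printed "inactive" is assumed typical output, not shown in this run):

	out/minikube-linux-amd64 ssh -p NoKubernetes-030682 "sudo systemctl is-active kubelet"   # expected: prints "inactive", exits 3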

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (11.88s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-918853 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1020 12:38:43.468407   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/addons-053741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-918853 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (11.862678745s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (11.88s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (51.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1020 12:39:25.371432   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.336149853s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.34s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (49.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.228675118s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-384253 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d9370c1f-a3cd-4443-a78d-24bb86844f37] Pending
helpers_test.go:352: "busybox" [d9370c1f-a3cd-4443-a78d-24bb86844f37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d9370c1f-a3cd-4443-a78d-24bb86844f37] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004245669s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-384253 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.28s)
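Note: testdata/busybox.yaml itself is not reproduced in this report. A minimal manifest consistent with the waits above would look like the following sketch; the pod name, label, and image match output elsewhere in this report, while the sleep command is an assumption (anything long-running works for the ulimit exec):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc   # image listed under VerifyKubernetesImages for this profile
	    command: ["sleep", "3600"]                        # assumed; keeps the container running for the exec check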

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (15.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-384253 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-384253 --alsologtostderr -v=3: (15.965988774s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-649841 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [45dbbb45-578b-4f3e-a055-b8e545812159] Pending
helpers_test.go:352: "busybox" [45dbbb45-578b-4f3e-a055-b8e545812159] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [45dbbb45-578b-4f3e-a055-b8e545812159] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003733234s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-649841 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384253 -n old-k8s-version-384253
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384253 -n old-k8s-version-384253: exit status 7 (68.574737ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-384253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
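Note: the "(may be ok)" marker reflects minikube status semantics: against a stopped cluster, status prints the host state on stdout and exits non-zero (7 here), so the test accepts the error before enabling the dashboard addon. Reproducible as:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384253; echo "exit: $?"   # expect "Stopped" plus a non-zero exit while the cluster is down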

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (45.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-384253 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (44.814882658s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384253 -n old-k8s-version-384253
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-649841 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-649841 --alsologtostderr -v=3: (16.250654454s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-649841 -n no-preload-649841
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-649841 -n no-preload-649841: exit status 7 (66.600372ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-649841 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (46.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-649841 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.884194141s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-649841 -n no-preload-649841
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-cvpnn" [3b04a5b6-792d-4f4a-9bc5-1880c814dee0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003629241s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-cvpnn" [3b04a5b6-792d-4f4a-9bc5-1880c814dee0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004134959s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-384253 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-384253 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (38.91994708s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.92s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-48d7f" [fec5bad0-dbb2-4040-ada9-4839502e4521] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003792624s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-48d7f" [fec5bad0-dbb2-4040-ada9-4839502e4521] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003341531s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-649841 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-649841 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (26.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (26.136775885s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-874012 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [13ae6f85-639e-44b4-aa3b-abfc21397973] Pending
helpers_test.go:352: "busybox" [13ae6f85-639e-44b4-aa3b-abfc21397973] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [13ae6f85-639e-44b4-aa3b-abfc21397973] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004472035s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-874012 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (40.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.915822714s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.92s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-874012 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-874012 --alsologtostderr -v=3: (18.153216935s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-916479 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-916479 --alsologtostderr -v=3: (2.576310889s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.58s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-916479 -n newest-cni-916479
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-916479 -n newest-cni-916479: exit status 7 (77.239854ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-916479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-916479 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.446178701s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-916479 -n newest-cni-916479
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.79s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-916479 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
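Note: the image audit works from image list --format=json. Outside the test harness, the same listing can be filtered with jq; the repoTags field name is assumed from typical output, which this report does not show:

	out/minikube-linux-amd64 -p newest-cni-916479 image list --format=json | jq -r '.[].repoTags[]'   # field name assumed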

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012: exit status 7 (104.257425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-874012 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-874012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.739968621s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-874012 -n default-k8s-diff-port-874012
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.06s)

TestNetworkPlugins/group/auto/Start (42.08s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.078796541s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.08s)

TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-907116 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b456bfa2-8544-4ae8-928b-cf120271b15c] Pending
helpers_test.go:352: "busybox" [b456bfa2-8544-4ae8-928b-cf120271b15c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b456bfa2-8544-4ae8-928b-cf120271b15c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004048601s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-907116 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

TestStartStop/group/embed-certs/serial/Stop (16.36s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-907116 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-907116 --alsologtostderr -v=3: (16.363756777s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.36s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-907116 -n embed-certs-907116
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-907116 -n embed-certs-907116: exit status 7 (69.500717ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-907116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (49.62s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-907116 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.282609795s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-907116 -n embed-certs-907116
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.62s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p7w4b" [5bed4e77-d51d-4392-adf0-69a3e5538205] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003614793s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
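
UserAppExistsAfterStop simply waits (up to 9m0s) for a Running kubernetes-dashboard pod after the restart. Below is a stripped-down sketch of that wait using client-go against the current kubeconfig; the namespace and label selector come from the log, and since the real helper also checks readiness, this phase-only poll is a simplification.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(9 * time.Minute) // the test waits up to 9m0s
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil {
			for _, p := range pods.Items {
				// Simplification: the harness additionally requires readiness.
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("healthy:", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for kubernetes-dashboard pod")
}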

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-312375 "pgrep -a kubelet"
I1020 12:43:29.563199   14592 config.go:182] Loaded profile config "auto-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-312375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nb7h4" [51f58e44-8966-4162-898a-270d8912cc96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nb7h4" [51f58e44-8966-4162-898a-270d8912cc96] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004559664s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.19s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p7w4b" [5bed4e77-d51d-4392-adf0-69a3e5538205] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002957653s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-874012 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-312375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
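
The DNS/Localhost/HairPin trio above reduces to three `kubectl exec` probes against the netcat deployment: an in-cluster DNS lookup, a dial to localhost, and a hairpin dial back through the pod's own service name (the case a CNI most often gets wrong). A sketch of the same probes, with the context name taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "auto-312375"
	probes := [][]string{
		{"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},                  // DNS
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}, // Localhost
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},    // HairPin
	}
	for _, p := range probes {
		args := append([]string{"--context", ctx}, p...)
		if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("probe %v failed: %v\n%s", p, err, out))
		}
	}
	fmt.Println("dns, localhost and hairpin probes all passed")
}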

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-874012 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestNetworkPlugins/group/kindnet/Start (41.69s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.692426784s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.69s)

TestNetworkPlugins/group/calico/Start (68.52s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.523515005s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.52s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hm4nh" [6db5786f-d83b-4956-90ac-a5bcfce24fb6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003777264s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hm4nh" [6db5786f-d83b-4956-90ac-a5bcfce24fb6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003915476s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-907116 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-907116 image list --format=json
E1020 12:44:25.371367   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/functional-012564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-lgpcv" [6b02e28d-303a-41dd-8e77-99e1e17c7502] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004233834s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (99.98s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m39.978661609s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (99.98s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-312375 "pgrep -a kubelet"
I1020 12:44:36.762196   14592 config.go:182] Loaded profile config "kindnet-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-312375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lht26" [74eb8e58-e547-42d8-978c-a7c4bffc5f30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lht26" [74eb8e58-e547-42d8-978c-a7c4bffc5f30] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004447021s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.20s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-312375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-qh855" [68955588-14b4-4016-8480-afa0d50b78e5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004520566s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/Start (61.44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1020 12:45:10.305027   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/old-k8s-version-384253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m1.439619253s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.44s)

TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-312375 "pgrep -a kubelet"
I1020 12:45:13.182889   14592 config.go:182] Loaded profile config "calico-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

TestNetworkPlugins/group/calico/NetCatPod (9.72s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-312375 replace --force -f testdata/netcat-deployment.yaml
I1020 12:45:13.751460   14592 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1020 12:45:13.887706   14592 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-49rlf" [17ea7874-69bd-4a36-b29d-b10bea100b98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-49rlf" [17ea7874-69bd-4a36-b29d-b10bea100b98] Running
E1020 12:45:20.547208   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/old-k8s-version-384253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003623195s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.72s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-312375 exec deployment/netcat -- nslookup kubernetes.default
E1020 12:45:23.009840   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:45:23.017265   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1020 12:45:23.029152   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:45:23.050552   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:45:23.092079   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1020 12:45:23.174382   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/flannel/Start (50.88s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (50.884831089s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.88s)

TestNetworkPlugins/group/enable-default-cni/Start (71.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1020 12:46:03.989326   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-312375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m11.577521058s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.58s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-312375 "pgrep -a kubelet"
I1020 12:46:10.384798   14592 config.go:182] Loaded profile config "bridge-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-312375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6js7g" [b833e6c4-11c7-480b-8ee2-590acee76c8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6js7g" [b833e6c4-11c7-480b-8ee2-590acee76c8b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004479705s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-312375 "pgrep -a kubelet"
I1020 12:46:14.503230   14592 config.go:182] Loaded profile config "custom-flannel-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-312375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9rgwm" [d04461be-de81-4f95-b4a3-943184cf63ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9rgwm" [d04461be-de81-4f95-b4a3-943184cf63ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.005643801s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

TestNetworkPlugins/group/bridge/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-312375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-312375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-k5mjs" [dbc4bd15-9477-4160-9360-15ba09c8f778] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003800193s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-312375 "pgrep -a kubelet"
I1020 12:46:40.751921   14592 config.go:182] Loaded profile config "flannel-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-312375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c426n" [9f15f077-b859-40b6-8470-160791174402] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c426n" [9f15f077-b859-40b6-8470-160791174402] Running
E1020 12:46:44.951559   14592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/no-preload-649841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003669942s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

TestNetworkPlugins/group/flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-312375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-312375 "pgrep -a kubelet"
I1020 12:46:55.641943   14592 config.go:182] Loaded profile config "enable-default-cni-312375": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-312375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vs4cs" [451c00cb-155c-45ab-a7f5-453bd52d41ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vs4cs" [451c00cb-155c-45ab-a7f5-453bd52d41ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00331136s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-312375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-312375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

Test skip (26/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-796609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-796609
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.1s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-312375 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-312375

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-312375

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-312375

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-312375

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-312375

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-312375

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-312375

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-312375

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-312375

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-312375

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: /etc/hosts:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: /etc/resolv.conf:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-312375

>>> host: crictl pods:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: crictl containers:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> k8s: describe netcat deployment:
error: context "kubenet-312375" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-312375" does not exist

>>> k8s: netcat logs:
error: context "kubenet-312375" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-312375" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-312375" does not exist

>>> k8s: coredns logs:
error: context "kubenet-312375" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-312375" does not exist

>>> k8s: api server logs:
error: context "kubenet-312375" does not exist

>>> host: /etc/cni:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: ip a s:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: ip r s:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: iptables-save:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: iptables table nat:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-312375" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-312375" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-312375" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: kubelet daemon config:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> k8s: kubelet logs:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 12:38:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-123936
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 12:38:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-918853
contexts:
- context:
    cluster: missing-upgrade-123936
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 12:38:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-123936
  name: missing-upgrade-123936
- context:
    cluster: pause-918853
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 12:38:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-918853
  name: pause-918853
current-context: ""
kind: Config
users:
- name: missing-upgrade-123936
  user:
    client-certificate: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/missing-upgrade-123936/client.crt
    client-key: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/missing-upgrade-123936/client.key
- name: pause-918853
  user:
    client-certificate: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.crt
    client-key: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.key
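
Note on the failures above: this kubeconfig has current-context set to "" and no kubenet-312375 entry at all; only missing-upgrade-123936 and pause-918853 are registered, which is exactly why every kubectl probe reports "context was not found". A minimal sketch of verifying that by hand (standard kubectl subcommands; the context name pause-918853 is taken from the dump above):

  # List the contexts that actually exist in this kubeconfig
  kubectl config get-contexts

  # Show the active context; with current-context "" this errors out
  kubectl config current-context

  # Target a registered context explicitly instead of relying on the default
  kubectl --context pause-918853 get pods -A
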
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-312375

>>> host: docker daemon status:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: docker daemon config:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: docker system info:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: cri-docker daemon status:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: cri-docker daemon config:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: cri-dockerd version:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: containerd daemon status:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: containerd daemon config:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: containerd config dump:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: crio daemon status:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: crio daemon config:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: /etc/crio:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

>>> host: crio config:
* Profile "kubenet-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312375"

----------------------- debugLogs end: kubenet-312375 [took: 2.945087213s] --------------------------------
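
The host: probes above are gathered by running commands inside the node, which fails here because the profile was never started. Had the profile existed, one could reproduce them manually along these lines (a sketch; minikube ssh and crictl are standard tools, and the profile name is the one from this run):

  # Run host-side checks inside the minikube node for a given profile
  minikube ssh -p kubenet-312375 -- sudo crictl pods
  minikube ssh -p kubenet-312375 -- sudo systemctl status crio
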
helpers_test.go:175: Cleaning up "kubenet-312375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-312375
--- SKIP: TestNetworkPlugins/group/kubenet (3.10s)
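
The skip at net_test.go:93 is expected: with the crio runtime there is no built-in kubenet networking, so a CNI has to be supplied. As a hedged sketch, a comparable cluster could be started manually with an explicit CNI (all flags are standard minikube options; bridge is one of the documented --cni values):

  # Hypothetical manual run: crio requires a CNI, so pick one explicitly
  minikube start -p kubenet-312375 --driver=docker --container-runtime=crio --cni=bridge
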
TestNetworkPlugins/group/cilium (3.58s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-312375 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-312375

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-312375

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-312375

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-312375

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-312375

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-312375

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-312375

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-312375

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-312375

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-312375

>>> host: /etc/nsswitch.conf:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: /etc/hosts:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: /etc/resolv.conf:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-312375

>>> host: crictl pods:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: crictl containers:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> k8s: describe netcat deployment:
error: context "cilium-312375" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-312375" does not exist

>>> k8s: netcat logs:
error: context "cilium-312375" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-312375" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-312375" does not exist

>>> k8s: coredns logs:
error: context "cilium-312375" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-312375" does not exist

>>> k8s: api server logs:
error: context "cilium-312375" does not exist

>>> host: /etc/cni:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: ip a s:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: ip r s:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: iptables-save:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: iptables table nat:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-312375

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-312375

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-312375" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-312375" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-312375

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-312375

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-312375" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-312375" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-312375" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-312375" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-312375" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: kubelet daemon config:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> k8s: kubelet logs:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 12:38:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-123936
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21773-11075/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 12:38:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-918853
contexts:
- context:
    cluster: missing-upgrade-123936
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 12:38:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-123936
  name: missing-upgrade-123936
- context:
    cluster: pause-918853
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 12:38:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-918853
  name: pause-918853
current-context: ""
kind: Config
users:
- name: missing-upgrade-123936
  user:
    client-certificate: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/missing-upgrade-123936/client.crt
    client-key: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/missing-upgrade-123936/client.key
- name: pause-918853
  user:
    client-certificate: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.crt
    client-key: /home/jenkins/minikube-integration/21773-11075/.minikube/profiles/pause-918853/client.key
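
This is the same kubeconfig as in the kubenet section above: no cilium-312375 context and an empty current-context. A hedged one-liner to confirm a given context is absent before digging further (standard kubectl output options; the grep pattern is just the profile name):

  # Check whether the cilium-312375 context exists in the active kubeconfig
  kubectl config get-contexts -o name | grep -x cilium-312375 || echo "context not found"
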
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-312375

>>> host: docker daemon status:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: docker daemon config:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: docker system info:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: cri-docker daemon status:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: cri-docker daemon config:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: cri-dockerd version:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: containerd daemon status:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: containerd daemon config:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: containerd config dump:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: crio daemon status:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: crio daemon config:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: /etc/crio:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

>>> host: crio config:
* Profile "cilium-312375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312375"

----------------------- debugLogs end: cilium-312375 [took: 3.412755382s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-312375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-312375
I1020 12:38:36.280638   14592 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3156273520/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1020 12:38:36.300473   14592 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3156273520/001/docker-machine-driver-kvm2 version is 1.37.0
--- SKIP: TestNetworkPlugins/group/cilium (3.58s)
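
If one did want to exercise Cilium outside this suite, minikube can deploy it directly; a hedged example follows (cilium is a documented --cni value, and the DaemonSet name assumes a default Cilium install):

  # Hypothetical manual run with the Cilium CNI in place of the skipped test
  minikube start -p cilium-312375 --driver=docker --container-runtime=crio --cni=cilium
  # Then inspect the DaemonSet the debug logs would have described
  kubectl --context cilium-312375 -n kube-system get daemonset cilium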